  •
    Testing Significance in Bayesian Classifiers.
    with Marcelo de Souza Lauretto
    Frontiers in Artificial Intelligence and Applications 132 34-41. 2005.
    The Fully Bayesian Significance Test (FBST) is a coherent Bayesian significance test for sharp hypotheses. This paper explores the FBST as a model selection tool for general mixture models, and gives some computational experiments for Multinomial-Dirichlet-Normal-Wishart models.
  •
    Bayesian Evidence Test for Precise Hypotheses
    Journal of Statistical Planning and Inference 117 (2): 185-198. 2003.
    The full Bayesian significance test (FBST) for precise hypotheses is presented, with some illustrative applications. In the FBST we compute the evidence against the precise hypothesis. We discuss some of the theoretical properties of the FBST, and provide an invariant formulation for coordinate transformations, provided a reference density has been established. This evidence is the probability of the highest relative surprise set, “tangential” to the sub-manifold (of the parameter space) that def…
  •
    Generalized Line Criterion for Gauss-Seidel Method.
    with Manuel Valentim de Pera Garcia and Carlos Humes
    Computational and Applied Mathematics 22 (1): 91-97. 2003.
    We present a module based criterion, i.e. a sufficient condition based on the absolute value of the matrix coefficients, for the convergence of the Gauss–Seidel method (GSM) for a square system of linear algebraic equations, the Generalized Line Criterion (GLC). We prove GLC to be the “most general” module based criterion and derive, as GLC corollaries, some previously known and also some new criteria for GSM convergence. Although far more general than the previously known results, the proof of GLC i…
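    A minimal sketch of this setting (illustrative, not the paper's own code): the classic line criterion, i.e. strict diagonal dominance by rows, is the best-known module based sufficient condition of the kind that GLC generalizes, and it guarantees convergence of the Gauss–Seidel iteration below. The function names and the test system are assumptions.
    ```python
    import numpy as np

    def line_criterion_holds(A):
        """Classic line criterion: strict diagonal dominance by rows.
        A module based sufficient condition for Gauss-Seidel convergence;
        the paper's GLC generalizes criteria of this kind."""
        A = np.asarray(A, dtype=float)
        diag = np.abs(np.diag(A))
        off_diag = np.sum(np.abs(A), axis=1) - diag
        return bool(np.all(diag > off_diag))

    def gauss_seidel(A, b, tol=1e-10, max_iter=1000):
        """Plain Gauss-Seidel iteration for the square system A x = b."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        x = np.zeros_like(b)
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(len(b)):
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old, ord=np.inf) < tol:
                break
        return x

    A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [0.0, 1.0, 3.0]]
    b = [6.0, 8.0, 4.0]
    assert line_criterion_holds(A)  # dominance holds, so GSM converges
    print(gauss_seidel(A, b))
    ```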
  •
    Environmental genotoxicity evaluation: Bayesian approach for a mixture statistical model
    with Angela Maria de Souza Bueno, Carlos Alberto de Braganca Pereira, and Maria Nazareth Rabello-Gay
    Stochastic Environmental Research and Risk Assessment 16. 2002.
    The data analyzed in this paper are part of the results described in Bueno et al. (2000). Three cytogenetic endpoints were analyzed in three populations of a species of wild rodent – Akodon montensis – living in an industrial, an agricultural, and a preservation area at the Itajaí Valley, State of Santa Catarina, Brazil. The polychromatic/normochromatic ratio, the mitotic index, and the frequency of micronucleated polychromatic erythrocytes were used in an attempt to establish a genotoxic profi…
  •
    A new Evidence Test is applied to the problem of testing whether two Poisson random variables are dependent. The dependence structure is that of Holgate’s bivariate distribution. This bivariate distribution depends on three parameters, 0 < theta_1, theta_2 < infty, and 0 < theta_3 < min(theta_1, theta_2). The Evidence Test was originally developed as a Bayesian test, but in the present paper it is compared to the best known test of the hypothesis of independence in a frequentist framework. It i…
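    For concreteness (a sketch under the standard trivariate-reduction construction of Holgate's distribution, consistent with the parameter constraints above but not code from the paper), a sampler in which theta_3 equals the covariance between the two counts, so theta_3 near 0 approaches the independence hypothesis under test:
    ```python
    import numpy as np

    def holgate_bivariate_poisson(theta1, theta2, theta3, size, rng):
        """Trivariate reduction: X = U + W, Y = V + W with independent
        Poisson components, so E[X] = theta1, E[Y] = theta2, and
        Cov(X, Y) = theta3. Requires 0 < theta3 < min(theta1, theta2)."""
        assert 0 < theta3 < min(theta1, theta2)
        u = rng.poisson(theta1 - theta3, size)
        v = rng.poisson(theta2 - theta3, size)
        w = rng.poisson(theta3, size)
        return u + w, v + w

    rng = np.random.default_rng(0)
    x, y = holgate_bivariate_poisson(3.0, 2.0, 0.8, 100_000, rng)
    print(np.cov(x, y)[0, 1])  # sample covariance, close to theta3 = 0.8
    ```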
  •
    A Weibull Wearout Test: Full Bayesian Approach
    with Telba Zalkind Irony, Marcelo de Souza Lauretto, and Carlos Alberto de Braganca Pereira
    Reliability and Engineering Statistics 5 287-300. 2001.
    The Full Bayesian Significance Test (FBST) for precise hypotheses is presented, with some applications relevant to reliability theory. The FBST is an alternative to significance tests or, equivalently, to p-values. In the FBST we compute the evidence of the precise hypothesis. This evidence is the probability of the complement of a credible set "tangent" to the sub-manifold (of the parameter space) that defines the null hypothesis. We use the FBST in an application requiring a quality control…
  •
    Simulated Annealing with a Temperature Dependent Penalty Function.
    ORSA Journal on Computing 4 311-319. 1992.
    We formulate the problem of permuting a matrix to block angular form as the combinatorial minimization of an objective function. We motivate the use of simulated annealing (SA) as an optimization tool. We then introduce a heuristic temperature dependent penalty function in the simulated annealing cost function, to be used instead of the real objective function being minimized. Finally we show that this temperature dependent penalty function version of simulated annealing consistently outperforms…
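    The pattern can be sketched generically (the 1/T penalty weight, cooling schedule, and toy constrained problem below are illustrative assumptions, not the paper's matrix permutation setup): the infeasibility penalty is re-weighted as the temperature drops, so the annealer explores freely early on and is driven toward feasibility at the end.
    ```python
    import math
    import random

    def anneal(x0, objective, violation, neighbor,
               t0=10.0, cooling=0.995, steps=20_000):
        """Simulated annealing with a temperature dependent penalty: the
        penalty weight 1/T grows as the temperature T drops, so early
        moves may pass through infeasible configurations while late
        moves are pushed toward feasibility."""
        x, t = list(x0), t0
        cost = lambda z, t: objective(z) + (1.0 / t) * violation(z)
        for _ in range(steps):
            y = neighbor(x)
            delta = cost(y, t) - cost(x, t)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x = y
            t = max(t * cooling, 1e-6)  # floor keeps the weight finite
        return x

    # Toy instance: minimize the number of ones in a 0/1 vector subject
    # to "at least 3 ones"; the constrained optimum has exactly three.
    random.seed(1)
    objective = lambda z: float(sum(z))
    violation = lambda z: float(max(0, 3 - sum(z)) ** 2)
    def neighbor(z):
        y = list(z)
        y[random.randrange(len(y))] ^= 1
        return y

    print(anneal([0] * 10, objective, violation, neighbor))
    ```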
  •
    Active Set Methods for Problems in Column Block Angular Form
    with Stephen A. Vavasis
    Computational and Applied Mathematics 12 (3): 199-226. 1993.
    We study active set methods for optimization problems in Block Angular Form (BAF). We begin by reviewing some standard basis factorizations, including Saunders' orthogonal factorization and updates for the simplex method that do not impose any restriction on the pivot sequence and maintain the basis factorization structured in BAF throughout the algorithm. We then suggest orthogonal factorization and updating procedures that allow coarse grain parallelization, pivot updates local to the affected…
  •
    Real Attribute Learning Algorithm.
    with Marcelo de Souza Lauretto, Fabio Nakano, and Celma de Oliveira Ribeiro
    ISAS-SCI’98 2 315-321. 1998.
    This paper presents REAL, a Real-Valued Attribute Classification Tree Learning Algorithm. Several of the algorithm's unique features are explained by the users' demands for a decision support tool to be used for evaluating financial operations strategies. Compared to competing algorithms, in our applications, REAL presents major advantages: (1) The REAL classification trees usually have smaller error rates. (2) A single conviction (or trust) measure at each leaf is more convenient than the trad…
  •
    A Dynamic Software Certification and Verification Procedure
    with Carlos Alberto de Braganca Pereira
    SCI’99 Proceedings 2 426-435. 1998.
    The Oct-14-1998 ordinance INDESP-104 established the federal software certification and verification requirements for gaming machines in Brazil. The authors present the rationale behind these criteria, whose basic principles can find applications in several other software authentication contexts.
  •
    Evidence and Credibility: Full Bayesian Significance Test for Precise Hypotheses.
    with Carlos Alberto de Braganca Pereira
    Entropy 1 (1): 69-80. 1999.
    A Bayesian measure of evidence for precise hypotheses is presented. The intention is to give a Bayesian alternative to significance tests or, equivalently, to p-values. In fact, a set is defined in the parameter space and the posterior probability, its credibility, is evaluated. This set is the “Highest Posterior Density Region” that is “tangent” to the set that defines the null hypothesis. Our measure of evidence is the complement of the credibility of the “tangent” region.
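    As a concrete illustration of this definition (a minimal Monte Carlo sketch under simplifying assumptions, not code from any of the papers): for a binomial model with a uniform prior and the sharp hypothesis p = p0, the posterior is Beta(x + 1, n - x + 1), the tangent set collects all p with posterior density above the density at p0, and the evidence in favor of H is the complement of that set's credibility.
    ```python
    import numpy as np

    def fbst_evidence_binomial(x, n, p0=0.5, draws=1_000_000, seed=0):
        """Monte Carlo FBST evidence for the sharp hypothesis p = p0 in
        a binomial model with a uniform prior (flat reference density).
        The tangent set T = {p : posterior(p) > posterior(p0)}; the
        evidence supporting H is 1 - Pr(T | data)."""
        rng = np.random.default_rng(seed)
        sample = rng.beta(x + 1, n - x + 1, draws)  # posterior draws

        def log_post(p):  # unnormalized log posterior density
            return x * np.log(p) + (n - x) * np.log1p(-p)

        in_tangent = log_post(sample) > log_post(p0)
        return 1.0 - in_tangent.mean()

    # Hypothetical data: 58 successes in 100 trials.
    print(fbst_evidence_binomial(x=58, n=100))  # about 0.1: modest support
    ```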
  •
    Significance Tests, Belief Calculi, and Burden of Proof in Legal and Scientific Discourse
    Frontiers in Artificial Intelligence and Applications 101 139-147. 2003.
    We review the definition of the Full Bayesian Significance Test (FBST), and summarize its main statistical and epistemological characteristics. We review also the Abstract Belief Calculus (ABC) of Darwiche and Ginsberg, and use it to analyze the FBST’s value of evidence. This analysis helps us understand the FBST properties and interpretation. The definition of value of evidence against a sharp hypothesis, in the FBST setup, was motivated by applications of Bayesian statistical reasoning to lega…
  •
    TORC3: Token-Ring Clearing Heuristic for Currency Circulation
    with Carlos Humes, Marcelo de Souza Lauretto, Fabio Nakano, Carlos Alberto de Braganca Pereira, and Guilherme Frederico Gazineu Rafare
    AIP Conference Proceedings 1490 179-188. 2012.
    Clearing algorithms are at the core of modern payment systems, facilitating the settling of multilateral credit messages with (near) minimum transfers of currency. Traditional clearing procedures use batch processing based on MILP - mixed-integer linear programming algorithms. The MILP approach demands intensive computational resources; moreover, it is also vulnerable to operational risks generated by possible defaults during the inter-batch period. This paper presents TORC3 - the Token-Ring Cle…
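    TORC3 itself circulates a token around a ring of participants; as background on why clearing pays off, here is a minimal sketch of multilateral netting, the computation every clearing procedure starts from (the obligation matrix is hypothetical):
    ```python
    import numpy as np

    # obligations[i, j] = amount participant i owes participant j
    obligations = np.array([
        [0.0, 5.0, 2.0],
        [3.0, 0.0, 4.0],
        [1.0, 6.0, 0.0],
    ])

    # Multilateral netting: each participant settles only its net
    # position instead of every gross bilateral obligation.
    owed_out = obligations.sum(axis=1)  # total each participant pays
    owed_in = obligations.sum(axis=0)   # total each participant receives
    net = owed_in - owed_out            # positive = net receiver

    print("gross transfers:", obligations.sum())   # 21.0
    print("net positions:  ", net)                 # sums to zero
    print("net transfers:  ", net[net > 0].sum())  # minimum currency: 4.0
    ```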
  •
    Auditable Blockchain Randomization Tool
    with Olivia Saa
    Proceedings 33 (17): 1-6. 2019.
    Randomization is an integral part of well-designed statistical trials, and is also a required procedure in legal systems. Implementation of honest, unbiased, understandable, secure, traceable, auditable and collusion resistant randomization procedures is a matter of great legal, social and political importance. Given the juridical and social importance of randomization, it is important to develop procedures in full compliance with the following desiderata: (a) Statistical soundness and computatio…
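    One simple way to meet the traceability and auditability desiderata (a sketch of the general idea, not the tool's actual protocol): derive the draw deterministically from a public seed, such as a blockchain block hash published only after commitments close, so any party can recompute and verify the outcome. The seed and case labels below are hypothetical.
    ```python
    import hashlib

    def auditable_shuffle(items, public_seed):
        """Deterministic, auditable ordering: a pure function of a
        publicly verifiable seed, so anyone can re-run and audit the
        draw. Sort key: SHA-256 of seed || item."""
        def key(item):
            data = (public_seed + str(item)).encode()
            return hashlib.sha256(data).hexdigest()
        return sorted(items, key=key)

    # Hypothetical block hash standing in for a real published one.
    seed = "00000000000000000002f5e8b4c1d0aa"
    cases = ["case-101", "case-102", "case-103", "case-104"]
    print(auditable_shuffle(cases, seed))  # same seed, same order
    ```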
  •
    Assessing Randomness in Case Assignment: The Case Study of the Brazilian Supreme Court.
    with Diego Marcondes and Claudia Peixoto
    Law, Probability and Risk 18 (2/3): 97-114. 2019.
    Sortition, i.e. random appointment for public duty, has been employed by societies throughout the years as a firewall designed to prevent illegitimate interference between parties in a legal case and agents of the legal system. In judicial systems of modern western countries, random procedures are mainly employed to select the jury, the court and/or the judge in charge of judging a legal case. Therefore, these random procedures play an important role in the course of a case, and should comply …
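    In the spirit of that question (though not the paper's methodology, which models the court's specific assignment rules), a minimal first-pass check asks whether case counts per judge are compatible with uniform random assignment; the counts below are hypothetical.
    ```python
    import numpy as np
    from scipy.stats import chisquare

    # Hypothetical counts of cases assigned to each of five judges.
    counts = np.array([108, 95, 87, 120, 90])

    # Under pure sortition every judge is equally likely, so expected
    # counts are uniform; a small p-value flags non-random assignment.
    stat, p_value = chisquare(counts)
    print(f"chi2 = {stat:.2f}, p = {p_value:.3f}")  # chi2 = 7.58, p = 0.108
    ```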
  •
    Combining Optimization and Randomization Approaches for the Design of Clinical Trials
    with Victor Fossaluza, Marcelo de Souza Lauretto, and Carlos Alberto de Braganca Pereira
    Springer Proceedings in Mathematics and Statistics 118 173-184. 2015.
    Intentional sampling methods are non-randomized procedures that select a group of individuals for a sample with the purpose of meeting specific prescribed criteria. In this paper we extend previous works related to intentional sampling, and address the problem of sequential allocation for clinical trials with few patients. Roughly speaking, patients are enrolled sequentially, according to the order in which they start the treatment at the clinic or hospital. The allocation problem consists in …
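    A minimal sketch of one such sequential scheme (the imbalance measure and the bias probability are illustrative assumptions, not the paper's exact procedure): each arriving patient is sent, with high probability, to the treatment arm that minimizes covariate imbalance, and otherwise at random, preserving unpredictability.
    ```python
    import random

    def imbalance(totals_a, totals_b):
        """Illustrative measure: L1 distance between covariate totals."""
        return sum(abs(a - b) for a, b in zip(totals_a, totals_b))

    def allocate(patient, totals_a, totals_b, bias=0.8):
        """Biased coin: with probability `bias`, pick the arm that
        minimizes imbalance after adding this patient; otherwise
        randomize."""
        to_a = [t + c for t, c in zip(totals_a, patient)]
        to_b = [t + c for t, c in zip(totals_b, patient)]
        best = "A" if imbalance(to_a, totals_b) <= imbalance(totals_a, to_b) else "B"
        return best if random.random() < bias else random.choice("AB")

    # Patients arrive sequentially with covariates (age decade, severity).
    random.seed(42)
    totals_a, totals_b = [0, 0], [0, 0]
    for patient in [(5, 2), (6, 3), (4, 1), (7, 3), (5, 2), (6, 1)]:
        arm = allocate(patient, totals_a, totals_b)
        target = totals_a if arm == "A" else totals_b
        for i, c in enumerate(patient):
            target[i] += c
        print(arm, totals_a, totals_b)
    ```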
  •
    Intentional Sampling by Goal Optimization with Decoupling by Stochastic Perturbation
    with Marcelo de Souza Lauretto, Fabio Nakano, and Carlos Alberto de Braganca Pereira
    AIP Conference Proceedings 1490 189-201. 2012.
    Intentional sampling methods are non-probabilistic procedures that select a group of individuals for a sample with the purpose of meeting specific prescribed criteria. Intentional sampling methods are intended for exploratory research or pilot studies where tight budget constraints preclude the use of traditional randomized representative sampling. The possibility of subsequently generalizing statistically from such deterministic samples to the general population has been the issue of long standin…
  •
    Randomization and Fair Judgment in Law and Science
    In Jose Acacio de Barros & Decio Krause (eds.), A True Polymath: A Tribute to Francisco Antonio Doria, College Publications. pp. 399-418. 2020.
    Randomization procedures are used in legal and statistical applications, aiming to shield important decisions from spurious influences. This article gives an intuitive introduction to randomization and examines some intended consequences of its use related to truthful statistical inference and fair legal judgment. This article also presents an open-code Java implementation for a cryptographically secure, statistically reliable, transparent, traceable, and fully auditable randomization tool.
  •
    Comments presented at the 35th International Seminar on the New Institutional Economics: Empirical Methods for the Law; Syracuse, 2018.
  •
    Pragmatic Hypotheses in the Evolution of Science.
    with Luis Gustavo Esteves, Rafael Izbicki, and Rafael Stern
    Entropy 21 (9): 1-17. 2019.
    This paper introduces pragmatic hypotheses and relates this concept to the spiral of scientific evolution. Previous works determined a characterization of logically consistent statistical hypothesis tests and showed that the modal operators obtained from this test can be represented in the hexagon of oppositions. However, despite the importance of precise hypotheses in science, they cannot be accepted by logically consistent tests. Here, we show that this dilemma can be overcome by the use of pr…
  •
    Simultaneous hypothesis tests can fail to provide results that meet logical requirements. For example, if A and B are two statements such that A implies B, there exist tests that, based on the same data, reject B but not A. Such outcomes are generally inconvenient to statisticians (who want to communicate the results to practitioners in a simple fashion) and non-statisticians (confused by conflicting pieces of information). Based on this inconvenience, one might want to use tests that satisfy lo…
  •
    This article analyzes the role of entropy in Bayesian statistics, focusing on its use as a tool for detection, recognition and validation of eigen-solutions. “Objects as eigen-solutions” is a key metaphor of the cognitive constructivism epistemological framework developed by the philosopher Heinz von Foerster. Special attention is given to some objections to the concepts of probability, statistics and randomization posed by George Spencer-Brown, a figure of great influence in the field of radica…
  •
    This paper has three main objectives: (a) Discuss the formal analogy between some important symmetry-invariance arguments used in physics, probability and statistics. Specifically, we will focus on Noether’s theorem in physics, the maximum entropy principle in probability theory, and de Finetti-type theorems in Bayesian statistics; (b) Discuss the epistemological and ontological implications of these theorems, as they are interpreted in physics and statistics. Specifically, we will focus on the …
  •
    This article explores some open questions related to the problem of verification of theories in the context of empirical sciences by contrasting three epistemological frameworks. Each of these epistemological frameworks is based on a corresponding central metaphor, namely: (a) Neo-empiricism and the gambling metaphor; (b) Popperian falsificationism and the scientific tribunal metaphor; (c) Cognitive constructivism and the object as eigen-solution metaphor. Each one of these epistemological fr…
  •
    Can a Significance Test Be Genuinely Bayesian?
    with Carlos Alberto de Braganca Pereira and Sergio Wechsler
    Bayesian Analysis 3 (1): 79-100. 2008.
    The Full Bayesian Significance Test, FBST, is extensively reviewed. Its test statistic, a genuine Bayesian measure of evidence, is discussed in detail. Its behavior in some problems of statistical inference like testing for independence in contingency tables is discussed.
  •
    Paraconsistent Sensitivity Analysis for Bayesian Significance Tests
    Lecture Notes in Artificial Intelligence 3171 134-143. 2004.
    In this paper, the notion of degree of inconsistency is introduced as a tool to evaluate the sensitivity of the Full Bayesian Significance Test (FBST) value of evidence with respect to changes in the prior or reference density. For that, both the definition of the FBST, a possibilistic approach to hypothesis testing based on Bayesian probability procedures, and the use of bilattice structures, as introduced by Ginsberg and Fitting, in paraconsistent logics, are reviewed. The computational and th…
  •
    Factorization of Sparse Bayesian Networks
    with Ernesto Coutinho Colla
    Studies in Computational Intelligence 199 275-285. 2009.
    This paper shows how an efficient and parallel algorithm for inference in Bayesian Networks (BNs) can be built and implemented combining sparse matrix factorization methods with variable elimination algorithms for BNs. This entails a complete separation between a first symbolic phase, and a second numerical phase.
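    The symbolic phase can be illustrated with a standard greedy minimum-degree elimination ordering on the network's moral graph (a common sparse factorization heuristic, assumed here for illustration rather than taken from the paper): eliminating a vertex connects its remaining neighbors, exactly as variable elimination creates a new factor over a node's neighbors.
    ```python
    def min_degree_order(adj):
        """Greedy minimum-degree elimination ordering on an undirected
        (moral) graph; eliminating a vertex adds fill-in edges among
        its remaining neighbors."""
        adj = {v: set(ns) for v, ns in adj.items()}
        order = []
        while adj:
            v = min(adj, key=lambda u: len(adj[u]))  # lowest degree
            neighbors = adj.pop(v)
            for u in neighbors:                      # add fill-in edges
                adj[u] |= neighbors - {u}
                adj[u].discard(v)
            order.append(v)
        return order

    # Moral graph of a small hypothetical network A -> C <- B, C -> D.
    graph = {"A": {"B", "C"}, "B": {"A", "C"},
             "C": {"A", "B", "D"}, "D": {"C"}}
    print(min_degree_order(graph))  # ['D', 'A', 'B', 'C']
    ```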
  •
    Emergent Semiotics in Genetic Programming and the Self-Adaptive Semantic Crossover
    with Rafael Inhasz
    Studies in Computational Intelligence 314 381-392. 2010.
    We present SASC, Self-Adaptive Semantic Crossover, a new class of crossover operators for genetic programming. SASC operators are designed to induce the emergence and then preserve good building-blocks, using metacontrol techniques based on semantic compatibility measures. SASC performance is tested in a case study concerning the replication of investment funds.
  •
    Decoupling, Sparsity, Randomization, and Objective Bayesian Inference
    Cybernetics and Human Knowing 15 (2). 2008.
    Decoupling is a general principle that allows us to separate simple components in a complex system. In statistics, decoupling is often expressed as independence, no association, or zero covariance relations. These relations are sharp statistical hypotheses that can be tested using the FBST - Full Bayesian Significance Test. Decoupling relations can also be introduced by some techniques of Design of Statistical Experiments, DSEs, like randomization. This article discusses the concepts of decoupl…
  •
    Language and the Self-Reference Paradox
    Cybernetics and Human Knowing 14 (4): 71-92. 2007.
    Heinz von Foerster characterizes the objects “known” by an autopoietic system as eigen-solutions, that is, as discrete, separable, stable and composable states of the interaction of the system with its environment. Previous articles have presented the FBST, Full Bayesian Significance Test, as a mathematical formalism specifically designed to assess the support for sharp statistical hypotheses, and have shown that these hypotheses correspond, from a constructivist perspective, to systemic eigen-s…