
Logic Academic Digest [12.24]

MrGreen arXiv Daily Academic Digest 2022-04-26

Update! The H5 page now supports collapsible abstracts for a better reading experience. Click "Read the original" to visit arxivdaily.com, covering CS | Physics | Math | Economics | Statistics | Finance | Biology | Electrical Engineering, with search, bookmarking, and more.


cs.LO (Logic), 4 papers in total


【1】 A Manifesto for Applicable Formal Methods
Link: https://arxiv.org/abs/2112.12758

Authors: Mario Gleirscher, Jaco van de Pol, Jim Woodcock
Abstract: Formal methods were frequently shown to be effective and, perhaps because of that, practitioners are interested in using them more often. Still, these methods are far less applied than expected, particularly in critical domains where they are strongly recommended and where they have the greatest potential. Our hypothesis is that formal methods still seem not to be applicable enough or ready for their intended use. In critical software engineering, what do we mean when we speak of a formal method? And what does it mean for such a method to be applicable both from a scientific and practical viewpoint? Based on what the literature tells us about the first question, with this manifesto we lay out a set of principles that, when followed by a formal method, give rise to its mature applicability in a given scope. Rather than exercising criticism of past developments, this manifesto strives to foster an increased use of formal methods to the maximum benefit.

【2】 Preprocessing in Inductive Logic Programming
Link: https://arxiv.org/abs/2112.12551

Author: Brad Hunter
Affiliation: Linacre College, University of Oxford
Note: 91 pages, 6 figures, Master's thesis
Abstract: Inductive logic programming (ILP) is a type of machine learning in which logic programs are learned from examples. This learning typically occurs relative to some background knowledge provided as a logic program. This dissertation introduces bottom preprocessing, a method for generating initial constraints on the programs an ILP system must consider. Bottom preprocessing applies ideas from inverse entailment to modern ILP systems. Inverse entailment is an influential early ILP approach introduced with Progol. This dissertation also presents $\bot$-Popper, an implementation of bottom preprocessing for the modern ILP system Popper. It is shown experimentally that bottom preprocessing can reduce the learning times of ILP systems on hard problems. This reduction can be especially significant when the amount of background knowledge in the problem is large.
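In inverse entailment, the bottom clause of a positive example is the most specific clause that explains the example relative to the background knowledge; every acceptable hypothesis must generalise it, so literals outside the bottom clause can be discarded before the search starts. The toy Python sketch below illustrates that pruning intuition on a ground example; the function and data are illustrative assumptions, not the actual $\bot$-Popper implementation.

```python
# Toy sketch of the bottom-clause idea from inverse entailment, which
# bottom preprocessing adapts to constrain an ILP system's search.
# Ground case only; real systems variabilise via mode declarations.

def bottom_clause(example, background):
    """Most specific (ground) clause for one positive example:
    head :- every background fact sharing a constant with the head."""
    _, head_args = example
    constants = set(head_args)
    body = [fact for fact in background if constants & set(fact[1])]
    return example, body

# Toy problem: learn grandparent/2 from parent/2 facts.
background = [("parent", ("ann", "bob")),
              ("parent", ("bob", "carl")),
              ("parent", ("eve", "dan"))]
example = ("grandparent", ("ann", "carl"))

head, body = bottom_clause(example, background)
print(head, ":-", body)
# parent(eve, dan) shares no constant with the example, so clauses
# built from it need never be generated: an initial constraint of the
# kind bottom preprocessing supplies to a system such as Popper.
```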

【3】 Algorithmic Probability of Large Datasets and the Simplicity Bubble Problem in Machine Learning
Link: https://arxiv.org/abs/2112.12275

Authors: Felipe S. Abrahão, Hector Zenil, Fabio Porto, Klaus Wehmuth
Affiliations: National Laboratory for Scientific Computing (LNCC), Petrópolis, RJ, Brazil; The Alan Turing Institute, British Library, London; Algorithmic Dynamics Lab
Abstract: When mining large datasets in order to predict new data, limitations of the principles behind statistical machine learning pose a serious challenge not only to the Big Data deluge, but also to the traditional assumption that data generating processes are biased toward low algorithmic complexity. Even when one assumes an underlying algorithmic-informational bias toward simplicity in finite dataset generators, we show that fully automated computable learning algorithms, with or without access to pseudo-random generators, in particular those of a statistical nature used in current approaches to machine learning (including deep learning), can always be deceived, naturally or artificially, by sufficiently large datasets. In particular, we demonstrate that, for every finite learning algorithm, there is a sufficiently large dataset size above which the algorithmic probability of an unpredictable deceiver is an upper bound (up to a multiplicative constant that only depends on the learning algorithm) for the algorithmic probability of any other larger dataset. In other words, very large and complex datasets are as likely to deceive learning algorithms into a "simplicity bubble" as any other particular dataset. These deceiving datasets guarantee that any prediction will diverge from the high-algorithmic-complexity globally optimal solution while converging toward the low-algorithmic-complexity locally optimal solution. We discuss the framework and empirical conditions for circumventing this deceptive phenomenon, moving away from statistical machine learning towards a stronger type of machine learning based on, or motivated by, the intrinsic power of algorithmic information theory and computability theory.
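For orientation, the quantities in this claim come from algorithmic information theory. The LaTeX fragment below records the standard definition of algorithmic probability and a schematic rendering of the deception bound; it is a sketch under standard assumptions, and the paper's exact statement and side conditions may differ.

```latex
% Schematic rendering of the quantities in the abstract, using the
% standard definitions from algorithmic information theory.
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

Fix a universal prefix machine $U$. The algorithmic probability of a
(finite, encoded) dataset $x$ is
\[
  \mathfrak{m}(x) \;=\; \sum_{p \,:\, U(p)=x} 2^{-|p|},
\]
which by the coding theorem satisfies
$\mathfrak{m}(x) = 2^{-K(x) \pm O(1)}$ for prefix complexity $K$.

The deception claim then reads, schematically: for every computable
learning algorithm $L$ there are a constant $C_L$, a threshold $n_0$,
and a ``deceiver'' $z$ unpredictable by $L$ such that
\[
  \mathfrak{m}(y) \;\le\; C_L \cdot \mathfrak{m}(z)
  \qquad \text{for every dataset } y \text{ with } |y| \ge n_0 .
\]
That is, past size $n_0$, no dataset is more likely in the algorithmic
sense than a deceiving one.

\end{document}
```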

【4】 A Point-free Perspective on Lax extensions and Predicate liftings
Link: https://arxiv.org/abs/2112.12681

Authors: Sergey Goncharov, Dirk Hofmann, Pedro Nora, Lutz Schröder, Paul Wild
Abstract: In this paper we take a fresh look at the connection between lax extensions and predicate liftings of a functor from the point of view of quantale-enriched relations. Using this perspective, we show in particular that various fundamental concepts and results arise naturally and that their proofs become very elementary. Ultimately, we prove that every lax extension is represented by a class of predicate liftings and discuss several implications of this result.
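A small finite instance of the two notions being related may help: for the powerset functor, the Egli-Milner (Barr) lifting of a relation is the canonical example of a lax extension, and the "box" modality is a predicate lifting. The Python sketch below checks both on a three-element carrier; it is an illustration of the standard definitions, not code from the paper.

```python
# Two concrete instances for the (covariant) powerset functor:
# the Egli-Milner/Barr relation lifting (a lax extension) and the
# "box" predicate lifting. Illustrative only.

def egli_milner(R, S, T):
    """Lift a relation R ⊆ A×B to subsets: S is related to T iff every
    element of S has an R-successor in T and every element of T has an
    R-predecessor in S."""
    return (all(any((a, b) in R for b in T) for a in S) and
            all(any((a, b) in R for a in S) for b in T))

def box(P, S):
    """Box predicate lifting: a subset S satisfies box(P) iff all of
    its elements satisfy the predicate P (given as a subset of A)."""
    return S <= P

# Tiny example over A = B = {0, 1, 2} with R = "successor".
R = {(0, 1), (1, 2)}
print(egli_milner(R, {0, 1}, {1, 2}))  # True: the lifted relation holds
print(egli_milner(R, {0, 1}, {2}))     # False: 0 has no successor in {2}
P = {0, 1}
print(box(P, {0, 1}), box(P, {1, 2}))  # True False
# The paper's result: every lax extension (such as egli_milner) is
# represented by a class of predicate liftings (such as box).
```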


