  • Gdzie jesteś, HAL? [Where Are You, HAL?]
    Przegląd Filozoficzny 22 (2): 167-184. 2013.
    Artificial intelligence emerged as a research field over 60 years ago. After the spectacular successes of its early years, thinking machines were expected to appear within a few years. That prediction failed entirely. Not only has a thinking machine not yet been built, but there is no agreement among scientists on what would characterize such a machine, or even on whether it is worth building one at all. In this article we attempt to trace the methodological discussion accompanying artificial int…
  • The Logical Structure of Intentional Anonymity
    with Michał Barcz and Adam Wierzbicki
    Diametros 16 (60): 1-17. 2019.
    It has been noticed by several authors that the colloquial understanding of anonymity as mere unknown-ness is insufficient. This common-sense notion of anonymity does not recognize the role of the goal for which anonymity is sought. Starting with the distinction between intentional and unintentional anonymity (which are usually taken to be the same) and the general concept of the non-coordinatability of traits, we offer a logical analysis of anonymity and identification (understood as de…
  • Privacy as an Asset
    In Marcellus Mindel, Kelly Lyons & Joe Wigglesworth (eds.), Proceedings of the 27th CASCON Conference, IBM/ACM. pp. 266-271. 2017.
    Many attempts to define privacy have been made over the last century. Early definitions and theories of privacy had little to do with the concept of information and, when they did, only in an informal sense. With the advent of information technology, the question of a precise and universally acceptable definition of privacy in this new domain became an urgent issue as legal and business problems regarding privacy started to accrue. In this paper, I propose a definition of informational privacy t…
  • On the Relationship Between the Aretaic and the Deontic
    Ethical Theory and Moral Practice 14 (5): 493-501. 2011.
    There are two fundamental classes of terms traditionally distinguished within moral vocabulary: the deontic and the aretaic. The terms from the first set serve in the prescriptive function of a moral code. The second class contains terms used for a moral evaluation of an action. The problem of the relationship between the aretaic and the deontic has not been discussed often by philosophers. It is, however, a very important and interesting issue: any normative ethical theory which takes as basic …
  • The Logical Structure of Stoic Ethics
    Apeiron 45 (3): 221-237. 2012.
    This paper is an attempt to reject the classical interpretation of Stoic ethics as virtue ethics. The typical assumptions of this interpretation, that virtue is the supreme good and that happiness can be reduced to virtue, are questioned. We first lay out the conceptual framework of Stoic philosophy and present an outline of the Stoic reduction of happiness to virtue. The main part of the paper provides an argument for reinterpreting virtue as rationality. In the last part of the paper – more s…
  • The Frame Problem in Artificial Intelligence and Philosophy
    Filozofia Nauki 21 (2): 15-30. 2013.
    The field of Artificial Intelligence has been around for over 60 years now. Soon after its inception, the founding fathers predicted that within a few years an intelligent machine would be built. That prediction failed miserably. Not only hasn’t an intelligent machine been built, but we are not much closer to building one than we were some 50 years ago. Many reasons have been given for this failure, but one theme has been dominant since its advent in 1969: The Frame Problem. What looked initiall…
  • Prawda i interpretacja [Truth and Interpretation] (review)
    Studia Filozoficzne 266 (1). 1988.
  • Some Technical Challenges in Designing an Artificial Moral Agent
    In Artificial Intelligence and Soft Computing. ICAISC 2020. Lecture Notes in Computer Science, vol 12416. Springer. pp. 481-491. 2020.
    Autonomous agents (robots) are no longer the subject of science fiction novels. Self-driving cars, for example, may be on our roads within a few years. These machines will necessarily interact with humans, and in these interactions they must take into account the moral outcomes of their actions. Yet we are nowhere near designing a machine capable of autonomous moral reasoning. In some sense this is understandable, as commonsense reasoning has turned out to be very hard to formalize. In this paper, we identify…