- Trusting Virtual Trust. Ethics and Information Technology 7 (3): 167-180. 2005.
  Can trust evolve on the Internet between virtual strangers? Recently, Pettit answered this question in the negative. Focusing on trust in the sense of ‘dynamic, interactive, and trusting’ reliance on other people, he distinguishes between two forms of trust: primary trust rests on the belief that the other is trustworthy, while the more subtle secondary kind of trust is premised on the belief that the other cherishes one’s esteem, and will, therefore, reply to an act of trust in kind (‘trust-res…
- Copyright or copyleft? An analysis of property regimes for software development. Research Policy 34 (10): 1511-1532. 2005.
  Two property regimes for software development may be distinguished. Within corporations, on the one hand, a Private Regime obtains which excludes all outsiders from access to a firm's software assets. It is shown how the protective instruments of secrecy and both copyright and patent have been strengthened considerably during the last two decades. On the other, a Public Regime among hackers may be distinguished, initiated by individuals, organizations or firms, in which source code is freely exc…
- Open Source Production of Encyclopedias: Editorial Policies at the Intersection of Organizational and Epistemological Trust. Social Epistemology 26 (1): 71-103. 2012.
  The ideas behind open source software are currently applied to the production of encyclopedias. A sample of six English text-based, neutral-point-of-view, online encyclopedias of the kind are identified: h2g2, Wikipedia, Scholarpedia, Encyclopedia of Earth, Citizendium and Knol. How do these projects deal with the problem of trusting their participants to behave as competent and loyal encyclopedists? Editorial policies for soliciting and processing content are shown to range from high discretion…
- Open Source Software: A New Mertonian Ethos? In Anton Vedder (ed.), Ethics and the Internet. Intersentia. 2001.
  Hacker communities of the 1970s and 1980s developed a quite characteristic work ethos. Its norms are explored and shown to be quite similar to those which Robert Merton suggested govern academic life: communism, universalism, disinterestedness, and organized scepticism. In the 1990s the Internet multiplied the scale of these communities, allowing them to create successful software programs like Linux and Apache. After renaming themselves the ‘open source software’ movement, with an emphasis on s…
- Navigating between Chaos and Bureaucracy: Backgrounding Trust in Open-Content Communities. In Karl Aberer, Andreas Flache, Wander Jager, Ling Liu, Jie Tang & Christophe Guéret (eds.), 4th International Conference, SocInfo 2012, Lausanne, Switzerland, December 5-7, 2012, Proceedings. Springer. 2012.
  Many virtual communities that rely on user-generated content (such as social news sites, citizen journals, and encyclopedias in particular) offer unrestricted and immediate ‘write access’ to every contributor. It is argued that these communities do not just assume that the trust granted by that policy is well-placed; they have developed extensive mechanisms that underpin the trust involved (‘backgrounding’). These target contributors (stipulating legal terms of use and developing etiquette, both…
- The use of software tools and autonomous bots against vandalism: eroding Wikipedia’s moral order? Ethics and Information Technology 17 (3): 175-188. 2015.
  English-language Wikipedia is constantly being plagued by vandalistic contributions on a massive scale. In order to fight them its volunteer contributors deploy an array of software tools and autonomous bots. After an analysis of their functioning and the ‘coactivity’ in use between humans and bots, this research ‘discloses’ the moral issues that emerge from the combined patrolling by humans and bots. Administrators provide the stronger tools only to trusted users, thereby creating a new h…
- From open-source software to Wikipedia: ‘Backgrounding’ trust by collective monitoring and reputation tracking. Ethics and Information Technology 16 (2): 157-169. 2014.
  Open-content communities that focus on co-creation without requirements for entry have to face the issue of institutional trust in contributors. This research investigates the various ways in which these communities manage this issue. It is shown that communities of open-source software continue to rely mainly on hierarchy (reserving write-access for higher echelons), which substitutes (the need for) trust. Encyclopedic communities, though, largely avoid this solution. In the particular case of …
- Profiling vandalism in Wikipedia: A Schauerian approach to justification. Ethics and Information Technology 18 (2): 131-148. 2016.
  In order to fight massive vandalism the English-language Wikipedia has developed a system of surveillance which is carried out by humans and bots, supported by various tools. Central to the selection of edits for inspection is the process of using filters or profiles. Can this profiling be justified? On the basis of a careful reading of Frederick Schauer’s books about rules in general (1991) and profiling in particular (2003) I arrive at several conclusions. The effectiveness, efficiency, and r…
- Internet-Based Commons of Intellectual Resources: An Exploration of their Variety. In Jacques Berleur, Markku I. Nurminen & John Impagliazzo (eds.), IFIP: Social Informatics: An Information Society for All? In Remembrance of Rob Kling (Vol. 223). Springer. 2006.
  During the two last decades, speeded up by the development of the Internet, several types of commons have been opened up for intellectual resources. In this article their variety is being explored as to the kind of resources and the type of regulation involved. The open source software movement initiated the phenomenon, by creating a copyright-based commons of source code that can be labelled ‘dynamic’: allowing both use and modification of resources. Additionally, such a commons may be either p…
- Trusting the (ro)botic other: By assumption? ACM SIGCAS Computers and Society 45 (3): 255-260. 2015.
  How may human agents come to trust (sophisticated) artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available though. As debated in the literature, humans may meet (ro)bots as they are embedded in an institution. If they happen to trust the institution, they will also trust them to have tried out and tested the machines in their back corridors; as a cons…
- Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability? Philosophy and Technology 31 (4): 525-541. 2018.
  Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the li…
- Coercion or empowerment? Moderation of content in Wikipedia as ‘essentially contested’ bureaucratic rules. Ethics and Information Technology 14 (2): 123-135. 2012.
  In communities of user-generated content, systems for the management of content and/or their contributors are usually accepted without much protest. Not so, however, in the case of Wikipedia, in which the proposal to introduce a system of review for new edits (in order to counter vandalism) led to heated discussions. This debate is analysed, and arguments of both supporters and opponents (of English, German and French tongue) are extracted from Wikipedian archives. In order to better understand…
- Online diaries: Reflections on trust, privacy, and exhibitionism (review). Ethics and Information Technology 10 (1): 57-69. 2008.
  Trust between transaction partners in cyberspace has come to be considered a distinct possibility. In this article the focus is on the conditions for its creation by way of assuming, not inferring trust. After a survey of its development over the years (in the writings of authors like Luhmann, Baier, Gambetta, and Pettit), this mechanism of trust is explored in a study of personal journal blogs. After a brief presentation of some technicalities of blogging and authors’ motives for writing their…
- How can contributors to open-source communities be trusted? On the assumption, inference, and substitution of trust. Ethics and Information Technology 12 (4): 327-341. 2010.
  Open-source communities that focus on content rely squarely on the contributions of invisible strangers in cyberspace. How do such communities handle the problem of trusting that strangers have good intentions and adequate competence? This question is explored in relation to communities in which such trust is a vital issue: peer production of software (FreeBSD and Mozilla in particular) and encyclopaedia entries (Wikipedia in particular). In the context of open-source software, it is argued that…
- The disciplinary power of predictive algorithms: a Foucauldian perspective. Ethics and Information Technology 21 (4): 319-329. 2019.
  Big Data are increasingly used in machine learning in order to create predictive models. How are predictive practices that use such models to be situated? In the field of surveillance studies many of its practitioners assert that “governance by discipline” has given way to “governance by risk”. The individual is dissolved into his/her constituent data and no longer addressed. I argue that, on the contrary, in most of the contexts where predictive modelling is used, it constitutes Foucauldian dis…
- Emerging roles for third parties in cyberspace. Ethics and Information Technology 3 (4): 267-276. 2001.
  In ‘real’ space, third parties have always been useful to facilitate transactions. With cyberspace opening up, it is to be expected that intermediation will also develop in a virtual fashion. The article focuses upon new cyberroles for third parties that seem to announce themselves clearly. First, virtualization of the market place has paved the way for ‘cybermediaries’, who broker between supply and demand of material and informational goods. Secondly, cybercommunication has created new uncertainties conce…
- Big data and algorithmic decision-making. ACM SIGCAS Computers and Society 47 (3): 39-53. 2017.
  Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Can transparency contribute to restoring accountability for such systems? Several objections are examined: the loss of privacy when data sets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of competitive edge, and the limited gains in answerability to be expected since sophis…
- Companies Committed to Responsible AI: From Principles towards Implementation and Regulation? Philosophy and Technology 34 (4): 1135-1193. 2021.
  The term ‘responsible AI’ has been coined to denote AI that is fair and non-biased, transparent and explainable, secure and safe, privacy-proof, accountable, and to the benefit of mankind. Since 2016, a great many organizations have pledged allegiance to such principles. Amongst them are 24 AI companies that did so by posting a commitment of the kind on their website and/or by joining the ‘Partnership on AI’. By means of a comprehensive web search, two questions are addressed by this study: Did…
- Algorithmic decision-making employing profiling: will trade secrecy protection render the right to explanation toothless? Ethics and Information Technology 24 (2). 2022.
  Algorithmic decision-making based on profiling may significantly affect people’s destinies. As a rule, however, explanations for such decisions are lacking. What are the chances for a “right to explanation” to be realized soon? After an exploration of the regulatory efforts that are currently pushing for such a right it is concluded that, at the moment, the GDPR stands out as the main force to be reckoned with. In cases of profiling, data subjects are granted the right to receive meaningful info…