Endi Kalemaj, Debate Alumnus
Human rights are those legal and/or moral rights that all persons have simply by virtue of being persons. In the current digital age, human rights are increasingly being realized or violated in the online environment, which calls for a way of conceiving the relationship between human rights and information technology. I propose that we need a Statement of Digital Rights. As a step towards developing such a statement, I suggest a framework for thinking about how to ensure that human rights are met in digital contexts.
The rapid evolution of information and communication technology (ICT) and related digital communications over the last two decades has dramatically changed communication practices worldwide. This has had profound implications for human rights on a number of levels. First, communication technologies present new ways to more fully realize our human rights. This is especially true of the right to freedom of expression. Second, ICT has provided human rights activists with new tools for protecting human rights. Internet access via mobile phones empowers citizens to communicate rights violations in real time to global audiences; social media tools connect human rights defenders around the world to increase collaboration and information sharing; and circumvention technologies allow people to evade attempts to monitor and control the flow of information and communication. However, in addition to unleashing tremendous new opportunities for the protection and advancement of human rights, digital communications also present a number of serious challenges. These include direct threats to human rights, such as the development of increasingly sophisticated censorship and surveillance mechanisms. They also include deeper, structural problems, such as the persistence of digital divides in access to infrastructure and communication capacities across geographical, gender and social lines.
The 2017 IEEE report on ethically aligned design for AI lists as its first principle that AI design should not violate international human rights. However, some AI systems are already violating such rights. For example, in March 2018, human rights investigators from the United Nations (UN) found that Facebook, through its algorithmically driven news feed, exacerbated hate speech and instigated violence in Myanmar (Reuters 2018). During a U.S. Congress hearing in April 2018, Senator Patrick Leahy asked CEO Mark Zuckerberg about the failure of Facebook's AI to detect content inciting possible genocide against Myanmar's ethnic Rohingya minority. While Zuckerberg initially told senators that more advanced AI tools would help solve the problem, he later admitted to investors that Facebook's AI systems would not be able to detect hate speech in local contexts accurately.
A human rights framework can help identify and anticipate some of the most serious societal harms of artificial intelligence by guiding the creation of technical and policy safeguards that promote its most positive uses. This can be achieved by activating the international human rights system in practice, including EU treaties, United Nations reports and advocacy initiatives, to monitor social impacts and establish redress when these rights are violated.
China is creating systems to categorize people according to their social behaviour. This "Social Credit" system is being built to collect data on Chinese citizens and rate them according to their social trustworthiness, as determined by the government. The system has punitive functions, such as "punishing" debtors by displaying their faces on large screens in public spaces or by placing these individuals on a blacklist for train or air travel (The Conversation, 2019). A common focus of the country's existing pilot schemes is to establish a standardized system of reward and punishment based on a citizen's credit score. Most pilot cities have used a points system: everyone starts with a base of 100 points. Citizens can earn bonus points, up to a total of 200, by doing "good deeds" such as getting involved in charity work or sorting and recycling garbage. In the city of Suzhou, for example, a blood donation earns six points.
Being a "good citizen" is well rewarded. In some regions, citizens with high social credit scores can enjoy access to fitness facilities, cheaper public transport and shorter waiting times in hospitals. On the other hand, those with low scores may be restricted from travelling and from accessing public services. At this stage, the points are linked to the citizen's identification card number, but a Chinese internet court has proposed an online identification system linked to social media accounts.
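The points mechanism described above can be sketched in a few lines of code. Only the base of 100 points, the cap of 200 and the six points for a blood donation come from the text; every other deed value and the reward/restriction thresholds are illustrative assumptions, not the actual rules of any pilot city.

```python
# Hypothetical sketch of a points-based social credit scheme.
BASE_SCORE = 100  # every citizen starts here (from the text)
MAX_SCORE = 200   # bonus points are capped at this total (from the text)

# Point values per "good deed". Only the blood-donation figure (Suzhou)
# appears in the text; the others are assumed for illustration.
DEED_POINTS = {
    "blood_donation": 6,
    "charity_work": 5,   # assumed value
    "recycling": 2,      # assumed value
}

def apply_deeds(score, deeds):
    """Add bonus points for each recorded deed, capped at MAX_SCORE."""
    for deed in deeds:
        score += DEED_POINTS.get(deed, 0)
    return min(score, MAX_SCORE)

def access_level(score):
    """Map a score to an assumed reward/restriction tier."""
    if score > BASE_SCORE:
        return "rewards"      # e.g. cheaper transport, gym access
    if score == BASE_SCORE:
        return "neutral"
    return "restricted"       # e.g. travel blacklist

citizen_score = apply_deeds(BASE_SCORE, ["blood_donation", "charity_work"])
print(citizen_score, access_level(citizen_score))  # 111 rewards
```

The sketch makes the essay's point concrete: once such a scoring rule exists, access to transport, hospitals and travel becomes a mechanical function of a government-defined number.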
Publishing blacklisted citizens' details online is a common practice, but some cities choose to take public shaming to another level. Some provinces have used TV and LED screens in public spaces to expose people. In some regions, authorities have remotely altered the ringback tones of blacklisted debtors so that callers hear a message along the lines of "the person you are calling is a dishonest debtor."
If successful, China’s reforms will allow its economy to take the lead in adapting to a dynamic world. But the sheer size of its ambitions (both global and local) also carries the risk that failures, if they occur, could have devastating consequences.
Privacy has also long been a major concern in areas involving government, business, academia and civil society organizations. The right to privacy is enshrined in Article 12 of the Universal Declaration of Human Rights, Article 17 of the International Covenant on Civil and Political Rights (ICCPR), national constitutions and national laws.
If the creators of artificial intelligence were to treat privacy as a fundamental human right, rather than simply an ethical preference, the technical definitions and standards of privacy that already exist in the industry would be stronger. The Stanford study gives a brief overview of how artificial intelligence can threaten privacy through rampant data collection and through the ability to re-identify anonymized users. The protection of the right to privacy is essential for the enjoyment of a number of related rights, such as freedom of expression, freedom of association, participation in political groups, and the right to information.
Another right that is also violated by artificial intelligence is the right to freedom of expression, which is especially important in an environment where social media platforms, through algorithms, decide “whose voices to listen to.” Freedom of expression is part of the fundamental human rights enshrined in Article 19 of the Universal Declaration of Human Rights. Social media platforms have already become the central place where public discussion takes place, but there is a strong debate about the role of platforms in moderating their content. With hate speech, fake news and media manipulation circulating on platforms like Facebook and Twitter, lawmakers and the public are urging companies to address the problem.
It will be inevitable that at some point along the way, humans will interact regularly with robots. Humanoid robots are evolving to do everything from working with astronauts in space to serving as your personal assistant.
Data published by the World Intellectual Property Organization show that Chinese and American firms hold about 85 percent of artificial intelligence patents worldwide. "It's a good thing the EU is looking to legislate on how data can be better used when it comes to personal data, but it should always be up to consumers to decide whether their data will be collected and how it is distributed."
Some actors have expressed concern that it will be difficult to define high-risk cases. They also fear that the compliance assessment envisaged by the EU will make the process more complex and bureaucratic.
The new laws will also have an impact on tech giants like Facebook, Google and Apple. On Monday, Facebook CEO Mark Zuckerberg met with EU officials as the company warned of the risks that such regulation could pose to innovation.
The Alter 3 robot, which has a human-like face and hands and moves its arms very naturally, replaced the conductor during the live performance of Keiichiro Shibuya's opera "Scary Beauty" in the United Arab Emirates. The role of robots in our daily lives may increase, but it is up to us to decide how artificial intelligence affects the human experience, one in which humans and androids create art together.
A survey of four groups of experts conducted in 2012/13 by AI researcher Vincent C. Müller and philosopher Nick Bostrom reported a 50% chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90% by 2075. The group went even further, predicting that so-called "superintelligence" – which Bostrom defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" – was expected about 30 years after the achievement of AGI.
However, recent assessments by AI experts are more cautious. Pioneers of modern AI research such as Geoffrey Hinton, Demis Hassabis and Yann LeCun say society is nowhere close to developing AGI. Given the skepticism of leading figures in the field, and given how different today's AI systems are from anything approaching AGI, there is probably little basis for the fear that a general artificial intelligence will disrupt society in the near future. Indeed, some AI experts believe that such predictions are extremely optimistic given our limited understanding of the human brain, and hold that AGI is still centuries away.