CHAPTER 5
Ethics & regulations
When we talk about AI, we cannot avoid talking about the dangers it may represent, whether through deepfakes, cybercrime, or data theft... many points remain opaque. We will therefore analyze the current risks, but also those in our future.
​
​
DEEP FAKING
"Deepfake" (or "hypertrucage" in French) is a portmanteau of the words "deep learning" and "fake". It is the imitation or transformation of existing people, videos, and voices by an artificial intelligence in order to produce disinformation or humorous content. Deepfakes are produced by full synthesis using deep learning: the AI ingests millions of faces and their expressions through a system of algorithms, generating ultra-realistic videos of human faces. Well-known deepfakes of Tom Cruise and Keanu Reeves already circulate online.
​​
The danger of deepfakes is the spread of nonsense and the use of the image of a public (or even private) figure with the aim of defrauding or manipulating. The resulting problem is knowing whether a video is real or not, and therefore a loss of trust in everything visible on social networks. Disinformation has always existed, but it is now more sophisticated and can carry political or economic stakes.
​
It would be interesting to create an AI capable of detecting deepfakes by analyzing images and videos and comparing them to a database, in order to demonstrate whether an image was generated or is real.
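As a purely illustrative sketch of the "compare against a database" idea, one can use a perceptual hash: two visually similar images produce nearly identical bit strings, so a manipulated copy of a known original shows up as a small, nonzero distance. Everything below (the 4x4 "images", the threshold, the function names) is hypothetical toy code, not a real deepfake detector, which would rely on learned features rather than pixel hashes.

```python
# Sketch of perceptual-hash matching against a database of known originals.
# All data here is synthetic; this is an illustration, not a detector.

def average_hash(pixels):
    """Hash a grayscale image (list of rows of 0-255 ints): each bit says
    whether that pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def closest_match(candidate, database, threshold=5):
    """Return (name, distance) of the nearest known image, or None if
    nothing in the database is within the threshold."""
    h = average_hash(candidate)
    best = min(((name, hamming(h, average_hash(img)))
                for name, img in database.items()), key=lambda x: x[1])
    return best if best[1] <= threshold else None

# Toy 4x4 "images": an original and a slightly manipulated copy.
original = [[10, 200, 10, 200],
            [200, 10, 200, 10],
            [10, 200, 10, 200],
            [200, 10, 200, 10]]
altered = [row[:] for row in original]
altered[0][0] = 250  # one manipulated pixel flips one hash bit
```

Calling `closest_match(altered, {"original": original})` returns `("original", 1)`: the copy is recognized as a near-duplicate of a known original, which is the kind of signal a verification system could surface.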
AI & Ethics​
When we talk about artificial intelligence, ethical problems inevitably arise. This is why we must also teach AI not to discriminate or stereotype, behavior that could have serious consequences for gender and ethnic inequalities.
​
This was the case at Amazon, which in 2014 ran part of its recruitment through an AI. Trained on the database of current employees, largely dominated by men, the AI ended up not recruiting women.
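The mechanism behind this kind of failure can be shown with a toy model (all resumes and keywords below are invented, and this is a deliberate oversimplification of how real recruitment models work): if candidates are scored by how much their resumes resemble past hires, and past hires skew male, the model inherits that skew.

```python
# Toy illustration of learned hiring bias: score candidates by how often
# their resume keywords appeared in past hires vs. past rejections.
# All historical data is hypothetical.
from collections import Counter

past_hires = [            # biased history: mostly male-coded resumes
    {"chess club", "engineering"},
    {"football team", "engineering"},
    {"engineering", "chess club"},
]
past_rejections = [
    {"women's chess club", "engineering"},
]

def keyword_weights(hires, rejections):
    """Weight = (times seen among hires) - (times seen among rejections)."""
    w = Counter()
    for resume in hires:
        for kw in resume:
            w[kw] += 1
    for resume in rejections:
        for kw in resume:
            w[kw] -= 1
    return w

def score(resume, weights):
    """Sum of learned keyword weights; unseen keywords count as zero."""
    return sum(weights[kw] for kw in resume)

weights = keyword_weights(past_hires, past_rejections)

# Two equally qualified candidates: the one whose resume mentions a
# women's club inherits the historical penalty.
candidate_a = {"engineering", "chess club"}
candidate_b = {"engineering", "women's chess club"}
```

Here `score(candidate_a, weights)` exceeds `score(candidate_b, weights)` purely because of the biased history, not because of any real difference in qualification: the model has faithfully learned the wrong lesson.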
​
Currently, AIs cannot guarantee error-free or fair choices: their algorithms can be distorted if the system we live in is itself unfair. An AI would, in effect, need to surpass human judgment to know whether its choices are right and good for humanity.
​
In November 2021, UNESCO member states adopted a Recommendation on the Ethics of Artificial Intelligence, intended to serve as a reference for respecting human rights and human dignity in the digital world.
​
For UNESCO, AI was a great help in the medical field and in the search for a vaccine during Covid-19; however, it raised other problems related to data access and contact tracing, and it has also created problems of gender inequality. Women currently make up only 22% of AI professionals, which can produce gender biases and stereotypes, such as virtual assistants like Siri, Alexa, and Cortana having female voices by default.
​
As for problems related to global warming, AI could help us find solutions for energy management, the protection of our ecosystems, and more. However, AI itself generates a great deal of CO2, so how can we reduce this energy consumption and make it cleaner?
​
There is still a great vacuum at the legislative level, and it is important that regulations can be applied worldwide. But how can we be sure there will not be political abuses, as has already happened with other technologies?
Regulation​
On May 16, 2023, Sam Altman, co-founder and CEO of OpenAI (the company behind ChatGPT), was called before the US Senate to discuss how to enforce global regulation without hampering innovation. Since the advent of LLMs (Large Language Models), AIs able to generate responses indistinguishable from a human's, potential risks have emerged: discrimination, prejudice, misinformation, and threats to security and confidentiality.
​
Currently, the United States is attempting to manage AI regulation at the national level. As for Europe, the European Commission proposed a first regulatory framework on AI in 2021.
​
The idea is to analyze and classify AI systems according to the risks they could pose to users. AI systems authorized in Europe must be safe, transparent, traceable, non-discriminatory, and respectful of the environment. They should be supervised by humans rather than left fully automated.
​
AIs that should be completely banned:
- Cognitive-behavioral manipulation of specific vulnerable individuals or groups, such as children;
- Social scoring that ranks people based on their behavior, socio-economic status, or personal characteristics;
- Real-time remote biometric identification systems such as facial recognition (although these could be used a posteriori in a police investigation, with judicial approval).
​
To ensure full transparency, content created by a generative AI must be disclosed as such; the AI must not create illegal content (child sexual abuse material, criminal content, etc.) and may only publish summaries of texts protected by copyright. Users must be informed when they are dealing with an AI and must remain free to stop the interaction.
It is therefore important that companies creating AIs set up fair governance systems, in order to build an AI ethics and avoid the dangerous abuses this technology could entail.
Dangers of AI​
When reality meets fiction, how can we not fear advances in AI without thinking of all the science fiction films that have addressed this subject? One fear is that an AI could become self-aware, which would pose an enormous ethical problem, much as human cloning does.
​
As with any scientific discovery, it is normal to imagine all the possible outcomes, even the worst. An AI with consciousness could question its inferior status relative to humans, rebel, or make its own choices contrary to the orders it is given. Consciousness pushes one to question the system, and an AI that judged itself superior to humans could therefore bring about the end of humanity.
​
It is also obvious that machines have already replaced humans in industry, as in the automobile sector, causing the loss of millions of jobs. Daunting tasks are assigned to robots, leading to the dehumanization of work, as when cashiers were replaced by automatic checkouts. This quest for automation, efficiency, and time savings could cost us part of our humanity.
There is also the risk of control and surveillance that states could impose on the population, as China has already done with AI-driven surveillance cameras using facial recognition.
​
All of these issues need to be addressed and investigated by government ethics committees so that control of AI is maintained and human rights and life on Earth are respected. Such a committee should include scientists, jurists, politicians, sociologists, doctors, and philosophers: an alliance that could guard humanity against the excesses of scientific and technological progress.
​
And if one day reality meets fiction and an ASI (Artificial Superintelligence) is built, it will certainly be necessary to impose on it the laws of robotics as written by Isaac Asimov in the "Robot" series and "Foundation":
​
- Zeroth Law: A robot may not harm humanity, nor, by inaction, allow humanity to be exposed to danger;
- First Law: A robot may not harm a human being, nor, by remaining passive, allow a human being to be exposed to danger, except where this would conflict with the Zeroth Law;
- Second Law: A robot must obey orders given to it by a human being, unless such orders conflict with the First Law or the Zeroth Law;
- Third Law: A robot must protect its own existence as long as such protection does not conflict with the First, Second, or Zeroth Law.
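The essential structure of these laws is a strict priority ordering: a lower law never overrides a higher one. As a playful, highly simplified sketch (the action fields and function names are invented, and the real laws involve far subtler judgments than boolean flags), this ordering can be expressed as a check that rejects an action at the first law it violates:

```python
# Illustrative sketch: Asimov's laws as a priority-ordered check.
# An action is a dict of hypothetical boolean flags; real "harm" is of
# course not decidable this simply.

LAWS = [
    ("Zeroth", lambda a: not a.get("harms_humanity", False)),
    ("First",  lambda a: not a.get("harms_human", False)),
    ("Second", lambda a: a.get("obeys_order", True)),
    ("Third",  lambda a: not a.get("self_destructive", False)),
]

def evaluate(action):
    """Return the name of the first law the action violates, or None if
    it is permitted. Checking in priority order means a lower law can
    never override a higher one: self-preservation (Third) never trumps
    obedience (Second), and obedience never trumps human safety (First)."""
    for name, permits in LAWS:
        if not permits(action):
            return name
    return None
```

For example, `evaluate({"obeys_order": False, "self_destructive": True})` reports a Second Law violation, not a Third: disobedience is caught before self-preservation is even considered, mirroring the hierarchy Asimov described.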