Have You Ever Heard of Responsible Artificial Intelligence?

To cut it short: be responsible with us. Try Storykube šŸš€

What does responsible AI mean?

Nowadays there is growing hype around the world of artificial intelligence and its application to the daily activities and aspects that affect our lives. People often talk about AI and its ethical implications without really knowing what they are talking about. Perhaps they are scared, because the word artificial, the opposite of something human, is juxtaposed with the word intelligence, something we think of as purely and exclusively human. But let’s see what the big dictionaries of the English language have to say.

These are the definitions from the Merriam-Webster dictionary: ā€˜a branch of computer science dealing with the simulation of intelligent behavior in computers’, which mostly addresses the study of AI rather than AI itself, and ā€˜the capability of a machine to imitate intelligent human behavior’, which, I admit, is kind of a scary thought. The Collins dictionary makes it softer, I think: ā€˜Artificial intelligence is a type of computer technology which is concerned with making machines work in an intelligent way, similar to the way that the human mind works’. I like this one, because it says machines work in an intelligent way, similar to the human mind; it does not say they are intelligent the way humans are.

So, most of the time, people discuss the ethics and trustworthiness of AI with skepticism and fear, particularly because they do not have a clear view of how the companies that develop, implement and use AI actually apply it. This is why companies and businesses working with AI have started to deal with and talk about responsible AI (RAI). What is it? The name says it: responsible AI is the practice of developing and deploying AI with the aim of empowering humans and businesses, impacting the planet and society fairly, with good and clear intentions, and instilling trust and confidence in people. Lots of big words. But believe it or not, this is what big tech companies are actually doing, even if you don’t see it.

Responsible AI stands on four basic pillars:

  • Governability: everyone involved in developing a product or service that uses artificial intelligence algorithms must be clear about the values of the company or organization they operate in and the expectations of the customers and users of those products and services.
  • Design: development of products and services using AI must begin with the goal of not betraying the trust of the end user. Privacy, transparency and security of input data must be ensured at every stage of development.
  • Monitoring: once on the market, these products and services remain under close scrutiny by specialized professionals (humans), to ensure fair results over time (a minimal sketch of what such monitoring can look like follows this list).
  • Education: investing in people’s education and training about how artificial intelligence works, even for the unfamiliar, so that users can better understand what adopting or not adopting certain artificial intelligence algorithms entails.
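
To make the monitoring pillar concrete, here is a minimal, hypothetical Python sketch: compare a deployed model’s accuracy on freshly labelled data against its launch baseline and flag it for human review when quality slips. The function names and the 5% tolerance are illustrative assumptions of ours, not any particular company’s tooling.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def check_for_degradation(baseline_acc, recent_preds, recent_labels, tolerance=0.05):
    """Flag the model for human review if accuracy drops past the tolerance."""
    recent_acc = accuracy(recent_preds, recent_labels)
    if baseline_acc - recent_acc > tolerance:
        # In a real deployment this would alert the on-call reviewers.
        print(f"ALERT: accuracy fell from {baseline_acc:.2f} to {recent_acc:.2f}")
        return True
    return False

# Toy usage: the model launched at 92% accuracy, but on this week's
# labelled sample it gets only 4 of 6 cases right (~67%), so it is flagged.
flagged = check_for_degradation(0.92, [1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1])
```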


What big tech companies think about responsible AI

From these pillars, big tech companies started to build their own responsible AI principles, promoting and spreading the practice of RAI among companies and corporations around the world. Google was among the first to provide a RAI toolbox for TensorFlow, its open-source machine learning library. That toolbox includes guidelines, tutorials and APIs that enable developers to create services and products that are truly on the consumer’s side.

According to Google, AI should:

  • be beneficial to society. AI can have an impact on healthcare, security, energy, transportation, manufacturing, and entertainment, hence humans need to take into account cultural, social, and legal norms as well as economic and environmental factors
  • be built and tested safely, so as not to put people at risk or harm them, and be accountable to people as well
  • avoid creating or reinforcing unfair biases, keeping in mind that we live in a very diverse world, with different cultures and societies, each with its own sensitivities (see the bias-check sketch after this list)
  • incorporate privacy design principles and be transparent about data storage and use
  • uphold high standards of scientific excellence and share that knowledge with educational intent
  • be made available only in accordance with a few principles, namely the purpose and use, nature, and scale of the AI technology, and Google’s involvement.
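
The point about unfair biases is also the most straightforward to turn into code. Toolkits like Google’s Responsible AI tooling for TensorFlow ship ready-made fairness metrics; the snippet below is only a hand-rolled Python sketch of one of the simplest checks, demographic parity: does the system approve one group at a noticeably different rate than another? All names and the 10% threshold are illustrative assumptions, not the toolkit’s actual API.

```python
from collections import defaultdict

def approval_rates(outcomes):
    """outcomes: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(outcomes):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy audit: group A is approved 3/4 of the time, group B only 1/4.
decisions = [("A", True), ("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False), ("B", False)]
gap = demographic_parity_gap(decisions)
if gap > 0.1:  # illustrative threshold; real audits set this per context
    print(f"Potential bias: approval rates differ by {gap:.0%}")
```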

Microsoft is not to be outdone either. As a matter of fact, they have established the Office of Responsible AI (ORA) and the Aether Committee (AI, Ethics, and Effects in Engineering and Research), whose purpose is to advise company leaders on the opportunities, but also on the social, economic and cultural problems and challenges, of artificial intelligence and how to address them. The principles of Bill Gates’ brainchild are:

  • fairness, meaning that AI systems should treat all people fairly
  • reliability and safety, meaning that these systems should perform in a reliable and safe way
  • privacy and security, meaning that they should be secure and respect people’s privacy
  • inclusiveness, meaning that AI systems should empower and engage everyone, with no distinction
  • transparency, meaning that AI systems should be understandable to the people who use them (see the model-card sketch after this list)
  • accountability, meaning that people should be accountable for AI systems.
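
Transparency in practice often takes the form of a ā€œmodel cardā€: a short, structured document that travels with the model and states what it is for, what data trained it, and where it should not be used. The Python sketch below uses nothing beyond the standard library; the fields are a condensed, illustrative subset of published model-card formats, and the example card is entirely hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A condensed, illustrative model card for transparency reporting."""
    name: str
    intended_use: str
    training_data: str
    known_limitations: list = field(default_factory=list)
    owner_contact: str = "unknown"  # accountability: a human answers for it

    def render(self) -> str:
        lines = [f"Model: {self.name}",
                 f"Intended use: {self.intended_use}",
                 f"Training data: {self.training_data}",
                 f"Accountable owner: {self.owner_contact}",
                 "Known limitations:"]
        lines += [f"  - {limit}" for limit in self.known_limitations]
        return "\n".join(lines)

# Hypothetical example card for a resume-screening model.
card = ModelCard(
    name="resume-screener-v2",
    intended_use="Rank applications for human review; never auto-reject.",
    training_data="2018-2021 applications, manually de-identified.",
    known_limitations=["Underrepresents career changers",
                       "English-language resumes only"],
    owner_contact="ml-governance@example.com",
)
print(card.render())
```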

The use of artificial intelligence systems is becoming increasingly common, with many businesses using them to automate processes and make decisions. These systems, however, must comply with privacy laws and with the regulatory bodies that govern data collection, processing, and storage, so that personal information stays protected. AI is a powerful tool that can make our lives better, but it also has the potential to cause problems if not properly regulated. That is why the major tech giants, like Google and Microsoft, have called for AI to be regulated and have built their own governance frameworks and guidelines.
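
As a small illustration of what ā€œprotecting personal informationā€ can mean at the code level, here is a hedged Python sketch of one common technique, pseudonymization: replacing direct identifiers with a keyed hash before data is stored or analyzed. The salt handling and field choices are illustrative assumptions; real compliance work (GDPR, CCPA, and so on) involves far more than this.

```python
import hashlib
import hmac

SECRET_SALT = b"rotate-me-and-keep-me-out-of-the-dataset"  # illustrative only

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_SALT, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

def scrub_record(record: dict) -> dict:
    """Return a copy of the record safer for downstream processing."""
    cleaned = dict(record)
    cleaned["email"] = pseudonymize(record["email"])
    del cleaned["full_name"]  # drop fields the analysis does not need
    return cleaned

raw = {"full_name": "Ada Lovelace", "email": "ada@example.com", "score": 87}
print(scrub_record(raw))  # email is now a 16-hex-character pseudonym
```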

Google and Microsoft are also among the big companies and organizations that make up the Partnership on AI to Benefit People and Society, a nonprofit coalition committed to the responsible use of artificial intelligence. It researches best practices for artificial intelligence systems with the aim of educating the public about artificial intelligence. Its members believe that through education, research, information, development and tool sharing, AI can help humans make the world more livable at every level, from the economy to the environment, from health to technology, from food to education.

Thus the ethical questions people raise about AI are complex and require the industry and interested parties to examine major social issues and, ultimately, the question of what makes us human. As has always been the case in human history, whenever there has been progress and change, and whenever it seemed that humanity was losing touch with its human side, everything has depended, and continues to depend, on the use we make of technologies and discoveries.

We must not forget that, although AI can act on its own, it was designed by us. Since AI is designed to empower humans and improve their lives, there is no reason why humans should develop and use AI to harm themselves in the first place. It would be counterproductive, contradictory and unreasonable.


We could all agree that there is no reason to fear or question the ethics of AI if the purpose for which we build it and the use we make of it are handled with responsibility, with cognizance and with… well, intelligence. Human intelligence. The danger is not AI taking over the world, but misuse and failure. In any case, no takeover of power by AI is on the horizon. No one would ever want to take on such a responsibility, so why take the risk? Better be responsible.


Resources:

Google AI (2022). Google AI [online]. [Accessed: 9 September 2022]. Available at: https://ai.google/responsibilities/

Microsoft (2022). Putting Principles into Practice [online]. Microsoft. [Accessed: 9 September 2022]. Available at: https://www.microsoft.com/en-us/ai/our-approach?activetab=pivot1%3aprimaryr5

Partnership on AI (2022). Partnership on AI [online]. [Accessed: 9 September 2022]. Available at: https://partnershiponai.org/

Responsible Artificial Intelligence Institute (2022). RAI [online]. [Accessed: 9 September 2022]. Available at: https://www.responsible.ai/