Admittedly, it has been a while since Google's Research blog pointed out that the company's own artificial intelligence was used to create a new AI. However, one should think about the opportunities and dangers of this kind of machine-created intelligence not only shortly after such reports, but continuously. After all, many digitally savvy people already use assistants such as Apple's Siri, Amazon's Alexa, or Google Now, and deep learning plays a role in all of them. The same goes for translation tools, image-recognition software and, soon, self-driving cars.
Deep learning and machine learning can already be found in many applications at Google. From image recognition to speech recognition to machine translation, there are many areas that benefit the end user in particular. Usually, many experts have to come together for the time-consuming work of building such a system. To accelerate the creation of a more or less capable artificial intelligence (AI), or one specialized in a particular field, Google simply assigned the task to an AI itself.
The full, technically detailed explanation of how artificial intelligence was used to create neural network architectures for machine learning can be found in the Google blog post linked above. In a nutshell: the process produced new network structures that are not only different from those of previous networks, but also better. The research team is now investigating the potential of these new structures and of the additional nodes within them, some of which had previously not been considered useful.
For the time being, "child" models seem to be the approach: starting networks that are given a task or a subject-specific core. Once these models have been trained and the parent (controller) AI receives feedback on their performance, adjustments and improvements are made. This creates ever better, ever more intelligently connected starting models as well as better specialized intelligences.
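The loop described above can be sketched in a few lines. This is only an illustrative toy, not Google's actual method: the search space, the `sample_child` controller and the simulated `evaluate` score are all invented here, and a real system would actually train each child network and feed its validation accuracy back to a learned controller.

```python
import random

# Hypothetical search space: how many layers a child model has and how wide they are.
SEARCH_SPACE = {"layers": [1, 2, 3], "units": [16, 32, 64]}

def sample_child(rng):
    """Controller step: sample a 'child' architecture from the search space."""
    return {"layers": rng.choice(SEARCH_SPACE["layers"]),
            "units": rng.choice(SEARCH_SPACE["units"])}

def evaluate(arch):
    """Stand-in for training the child and measuring validation accuracy.
    Here the score is simulated; a real system would train the network."""
    return 0.5 + 0.1 * arch["layers"] + 0.001 * arch["units"]

def search(steps=20, seed=0):
    """Feedback loop: sample children, score them, keep the best so far."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(steps):
        arch = sample_child(rng)
        score = evaluate(arch)  # feedback that guides the parent AI
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

Here the "parent" is just random search that remembers its best child; the published approaches replace this with a recurrent controller trained via reinforcement learning on the children's accuracy.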
The opportunities are obvious: the more efficient creation of software and technology that can take on tasks and carry out instructions. Numerous products could result from this, for science and industry as well as for household use.
The dangers lie in ethics: in theory, an AI must be explicitly given ethical and moral principles. From a purely logical point of view, an AI could, for example, decide to use all recordings from the webcam to optimize its face and object recognition. From a moral and legal point of view, however, this is incompatible with data protection.
Even self-driving cars may at some point be faced with the question: "Do I run over the person in the lane to my right, or do I endanger the five people in the car to my left?" Who is then to blame for an accident? And is the logical answer the correct one?
It is still science fiction that man and machine confront each other and have to make decisive moves. However, it is quite possible that ethical and moral conflicts will soon bring lawmaking, jurisprudence and related fields into contact with artificial intelligence. Especially as new "brains" are reproduced faster and faster.
After graduating from high school, Johannes completed an apprenticeship as a business assistant specializing in foreign languages. He then decided to research and write instead, which led to his self-employment. For several years he has been writing for Sir Apfelot, among others. His articles cover product introductions, news, how-tos, video games, consoles and more. He follows Apple keynotes live via stream.