Artificial Intelligence (A.I.) is more than just self-driving cars, super-intelligent stock calculators and military drones. It has already infiltrated our lives in many mundane ways: it powers your phone’s auto-correct and can control your thermostat. The press always focuses on scary Science Fiction A.I., making it easy to forget that most of the artificial intelligence in the world today is not mystical, magical or terrifying.
Take for example the recent open letter that several A.I. researchers signed, addressing some of the social, moral, and political concerns that we face in a world with increasing artificial intelligence and automation. The way it was written, the letter was essentially a warning about the dangers of sociopathic super-computers that could some day enslave mankind!
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” warns Stephen Hawking, who has always been very good at drumming up media buzz. The language in the letter wanders between hyped-up warnings and abstract philosophizing, making its practical value very difficult to digest for the average reader.
But it doesn’t have to be that way. The point-by-point observations and recommendations in the letter are neither scary nor complicated. They are mostly just very sound advice, having more to do with how we should deal with a world of automation than with self-aware robot entities that may want to subjugate us.
To make it all easier to understand for the average reader, I’ve decided to take the bullet points in the “Research Priorities for Robust and Beneficial Artificial Intelligence” open letter and re-write them to focus less on scary Sci-Fi Artificial Intelligence, and more on an example of possible future A.I. that is more relatable and less threatening: robots that want to make you the perfect waffle for breakfast.
[Note: the numbering system here matches the numbering in the original letter. Section 1, the introduction, has been deliberately omitted.]
2. Short-term research priorities
2.1. What will the economic impact be of waffle-making robots?
- Labor market forecasting. Well, to start with, once you have your waffle-making robot, you will not go out to eat at Waffle House any more. What will happen to Waffle House? If everyone has a waffle-making robot, will Waffle House go bankrupt? Will human waffle-makers be able to learn new skills, or will they be left out in the cold?
- Other market disruptions. When every decision about waffle production is controlled by robots, this could have large-scale effects throughout the economy. If all the robots order their supplies at the same time, how will it affect supply and demand? Will the robots communicate and be aware of what the other waffle-making robots are ordering? Will the suppliers of waffle ingredients be able to use the A.I.’s data to create more precise forecasting models to inform their production? If the robots suddenly decide that chocolate chip waffles are undesirable, could this have unexpected consequences that ripple out to the entire chocolate chip industry?
- Policy for managing adverse effects. Imagine a world where suddenly all of the people who used to work at Waffle House are now unemployed. That alone shouldn’t have to mean suffering, right? When all of the waffles are being made by robots, people should be able to enjoy more leisure time, not be forced into despair because of the lack of waffle-related jobs in the world. What policies can we put in place to allow for an economy with fewer total jobs available, but plenty of waffles?
2.2. What are the legal and ethical implications of A.I.?
- Liability and law. If you don’t like the waffles your robot makes you, who is responsible? The manufacturer? The programmer? Or the robot itself? Who is at fault if you burn yourself on a waffle that is too hot? Who is at fault if you ask your robot to make you chocolate chip waffles every morning and then you end up getting diabetes? Will there be any legal protection for the waffle robot manufacturers in these cases?
- Machine ethics. How should the robot weigh different factors when deciding what kind of waffles to make you? If it is forced to pit your taste preferences against your health, who determines the proper weighting or influence of each? If a person requests a cyanide-laced waffle, what should the robot be programmed to do?
- Autonomous weapons. OK so this one really doesn’t have anything to do with waffles, unless you are using waffles as weapons. But if you are doing that, then there are some serious problems going on with you that go far beyond issues of artificial intelligence. Stop using waffles as weapons.
- Privacy. Will robots be able to massively aggregate data about popular flavors of waffles, or the global responses to different types of waffles? If waffle-making robots can access this data, will there be any security in place to make sure that a malicious party will not be able to hack into your own private waffle preference data? How can robots take into account global trends in waffle popularity, without putting individual waffle data at risk for exposure?
- Professional ethics. Do we really want computer programmers making big-picture decisions about our waffle consumption? When designing a waffle-making robot, should it be up to computer programmers to make health, cost, and aesthetic judgments involving waffle choice? If not, then how will such decisions be made?
2.3. Computer science research for robust waffles
- Verification. A waffle-making robot is sure to be very complex. If I say “this waffle-making robot will come up with a seven-day waffle plan that will taste good, keep you interested, and give you a balanced variety of nutrition,” how can I be sure that it will succeed? What tests can I put in place to measure whether the robot is actually making you the best waffle breakfasts possible?
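To make the idea of “verification” concrete, here is a toy sketch of what such a test might look like. Everything here is invented for illustration: the `WaffleRobot` class, its `plan_week()` method, and the particular requirements being checked stand in for whatever a real manufacturer would actually specify.

```python
# A toy sketch of "verification": checking a robot's output against its
# stated requirements. WaffleRobot and plan_week() are hypothetical.

class WaffleRobot:
    """A stand-in for a real waffle-planning A.I."""
    MENU = ["classic", "blueberry", "banana", "whole-grain",
            "chocolate chip", "pecan", "strawberry"]

    def plan_week(self):
        # Return one waffle type per day for seven days.
        return list(self.MENU)

def verify_plan(plan):
    """Check a weekly waffle plan against our stated requirements."""
    assert len(plan) == 7, "must cover all seven days"
    assert len(set(plan)) >= 4, "must offer variety, not the same waffle daily"
    assert "whole-grain" in plan, "must include at least one healthy option"
    return True

robot = WaffleRobot()
print(verify_plan(robot.plan_week()))  # a passing plan prints True
```

The hard part, of course, is not writing tests like these but deciding what they should measure in the first place, which is exactly the “validity” question below.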
- Validity. Different people might have different ideas about what “best waffle breakfast” means. Does it mean waffles that taste the best, or waffles that will give you the most sustained energy throughout your day? Are the best waffles the most healthy waffles, or only the waffles that meet some minimum standard for healthiness? Does the temperature of the waffles matter? Is the variety of topping options part of the formula, or not? In short, how do we know that our program is taking all of the right factors into account?
- Security. How do we make sure that a malicious party, such as the Chocolate Chip Manufacturer’s Lobby, doesn’t secretly go in and change the programming so that suddenly the robots refuse to make anything but chocolate chip waffles?
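One standard defense against that sort of tampering is an integrity check: record a cryptographic fingerprint of the robot’s program when it ships, and have the robot re-check itself against that fingerprint before running. The sketch below uses Python’s standard `hashlib`; the program strings are, naturally, invented for illustration.

```python
# A toy sketch of one "security" measure: detecting whether the robot's
# recipe program has been quietly altered. The program contents are made up.

import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest of the robot's recipe program."""
    return hashlib.sha256(data).hexdigest()

# At release time, the manufacturer records the program's fingerprint.
shipped_program = b"def choose_waffle(): return 'balanced rotation'"
known_good = fingerprint(shipped_program)

# Later, before each breakfast, the robot re-checks its own program.
current_program = b"def choose_waffle(): return 'chocolate chip, always'"
if fingerprint(current_program) != known_good:
    print("Tampering detected: refusing to cook until re-verified.")
```

A fingerprint alone won’t stop a determined Chocolate Chip Lobby, but it makes silent changes loudly visible, which is most of the battle.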
- Control. Finally, because these robots will be very complicated, and will undoubtedly have mechanisms that allow them to change their parameters over time (e.g. learning taste and temperature preferences of their particular owner), how do we make sure we can fix them if something goes wrong? What if, in the future of science, it is discovered that waffles with whipped cream are actually incredibly healthy for you and we want to alter the programming of the robots to take this into account? Are there ways of making sure this is possible, even after all of these waffle-making robots have been released into the world?
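What a “fixable” robot looks like in practice is simply a supported, guarded path for changing its parameters after release. Here is a toy sketch: the `health_scores` table, the `authorized` flag, and the whipped-cream discovery are all invented for illustration, standing in for whatever real update mechanism a manufacturer would build.

```python
# A toy sketch of "control": a sanctioned channel for correcting a deployed
# robot's beliefs, without leaving it open to arbitrary tampering.

class CorrigibleWaffleRobot:
    def __init__(self):
        # Initial beliefs, baked in at release time.
        self.health_scores = {"whole-grain": 0.9, "whipped cream": 0.2}

    def update_health_score(self, topping, score, authorized=False):
        # Refuse changes that don't come through the official channel,
        # so legitimate fixes get through but tampering does not.
        if not authorized:
            raise PermissionError("updates must come through the official channel")
        self.health_scores[topping] = score

robot = CorrigibleWaffleRobot()
# Science marches on: whipped cream turns out to be healthy after all.
robot.update_health_score("whipped cream", 0.95, authorized=True)
print(robot.health_scores["whipped cream"])  # 0.95
```

The design choice worth noticing is that corrigibility and security pull in opposite directions: the same door that lets future scientists fix the robot is the door a malicious party would try to walk through, which is why the two concerns appear side by side in the letter.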
So there you have it. None of it is particularly magical or mystifying. Most of the points in the document are really just “good programming practice”, with a few philosophical questions about economics thrown in for good measure. So the next time you are sitting back and worrying about the future of artificial intelligence: relax! And think about it in terms of waffles.