Artificial Intelligence and Life in 2030: Stanford’s ongoing study

Stanford University has assembled a panel of thinkers to gauge the future impact of AI, and their first full report has been issued.

One of California’s top academic institutions has gathered a collection of industrial and intellectual minds to explore the effects that artificial intelligence may have on future societies. Using the year 2030 as a projected marker, the Stanford-hosted panel considers both macro-societal outcomes and the more urban, ground-level changes that ‘smart’ technology may bring about.

The panel’s first official report, titled “Artificial Intelligence and Life in 2030”, examines the real-world implications of intelligent machines, and it turns out to be quite far removed from the sci-fi intrigue and doomsday action sequences that Hollywood’s take on AI has led us to expect.

Since artificial intelligence has so far existed almost exclusively on the silver screen and in the pages of far-fetched novels, there is no immediate authority to consult on the matter of AI becoming a real part of human life. Stanford therefore convened a diverse group of experts to investigate the subject in a theoretical sense. The international panel currently consists of 17 industrial and academic thinkers from a range of personal and professional backgrounds.

The committee’s first meeting was held in 2015 and was spearheaded by Harvard computer scientist Barbara Grosz. During the first talks, she said: “AI technologies can be reliable and broadly beneficial. Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

The Artificial Intelligence and Life in 2030 document forms part of a much larger Stanford University initiative called AI100 (the One Hundred Year Study on Artificial Intelligence), the product of Stanford alumni Eric and Mary Horvitz. AI100 charges a host of scientists with reporting periodically on global developments in artificial intelligence over the course of the following century, creating a multi-generational record of assessments of AI’s impacts.

Part of the assembly’s purpose is to fuel discussion of the safety of artificial intelligence and the reasonable application of these swiftly rising technologies. The researchers agree that this discussion should not be confined to the intellectual elite, as they write in the report: “It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” pointing also to the need for open, informal discourse in the public sphere.

Using the hypothetical model of a nondescript American city, the panel considers the civic domains that will be affected by the rise of increasingly autonomous technology and software, from transportation infrastructure to service industries. The report unpacks the panel’s findings and describes the forecasted ripple effects of AI technology based on present-day cases. The document also includes a comprehensive glossary to help lay readers navigate dense AI concepts.

The report examines eight specific categories of civic life and estimates the profound and pervasive change that AI will bring to each domain. These include five practical areas in which AI will probably flourish, such as transportation and the potential of self-driving cars. Emphasis is also placed on employment, with a section on the likelihood that specific industries will experience sharp changes in jobs and salaries.

Another theme in focus is the technological impact of AI at large, for instance the application of complex language-processing systems. Certain programmes in development will be able to perceive nuance, connotation and subtext in language rather than being confined to literal, dictionary-style definitions. This could have a substantial impact on the way information is assimilated on the internet, for example.
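To see why perceiving connotation matters, consider a toy sketch (not taken from the report, and far simpler than any real system) contrasting literal keyword matching with a context-aware rule for the slang sense of the word “sick”:

```python
# Toy illustration: a literal keyword matcher versus a rule that
# recognises "sick" as positive slang when it follows an intensifier.
# The word lists and rules here are invented for demonstration only.

NEGATIVE_WORDS = {"sick", "terrible", "awful"}

def literal_sentiment(text):
    """Flag a sentence as negative if it contains any negative keyword."""
    words = text.lower().split()
    return "negative" if NEGATIVE_WORDS & set(words) else "neutral"

def context_aware_sentiment(text):
    """Treat 'sick' as positive slang when preceded by an intensifier."""
    words = text.lower().split()
    for i, word in enumerate(words):
        if word == "sick" and i > 0 and words[i - 1] in {"so", "totally"}:
            return "positive"  # slang sense, e.g. "that trick was so sick"
    return literal_sentiment(text)

print(literal_sentiment("that trick was so sick"))        # negative
print(context_aware_sentiment("that trick was so sick"))  # positive
print(context_aware_sentiment("he stayed home sick"))     # negative
```

Real language-processing systems rely on far richer statistical context than a single preceding word, but the gap between the two functions above is the gap the report is pointing at: meaning depends on surrounding context, not on a word’s fixed definition.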

The report also treats the subject of public policy regarding AI: how should artificial intelligence be regulated to enhance human life? The researchers recognise the danger that ill-informed policy could stifle AI’s benefits to humanity entirely, as they write: “Misunderstandings about what AI is and is not could fuel opposition to technologies with the potential to benefit everyone. Poorly informed regulation that stifles innovation would be a tragic mistake.”

The panel's full report, Artificial Intelligence and Life in 2030, can be found on Stanford's website.