Economy in the Era of Artificial Intelligence

These days, artificial intelligence (AI) tends to be understood as machine learning, in which computers no longer follow formalized rules written by people (for example, for playing chess) but learn from examples, i.e. from large-scale arrays of data. Today, the human ability to generalize from limited data appears to be a frontier of artificial intelligence, distinguishing it from “natural” intelligence. For economists, machine learning means a radical reduction in the cost of prediction, where prediction is understood as the ability to generate new information on the basis of known information. In this sense, prediction is a fundamental part of any decision-making process, so a radical reduction in its cost matters for economic life as a whole and is an important issue for economic policy. In a recent article, NBER economists Ajay Agrawal, Joshua Gans and Avi Goldfarb discuss the most important aspects of economic policy for the development of AI.
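
To make this notion of prediction concrete, here is a minimal sketch (a hypothetical example of ours, not taken from the article): a model is fitted to known input–output pairs and then generates new information, a prediction, for an input it has never seen.

```python
# A minimal sketch of "prediction as generating new information from
# known information": fit a model on example pairs, then use it on an
# unseen input. The data and model choice here are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

# known information: example pairs of (floor area in m^2, price)
X_known = np.array([[30], [45], [60], [80]])   # inputs
y_known = np.array([90, 135, 180, 240])        # observed outputs

model = LinearRegression().fit(X_known, y_known)  # learn from examples

# new information: a predicted price for a flat the model never saw
print(model.predict(np.array([[55]])))  # -> [165.]
```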

Confidentiality under Threat

The first problem concerns the confidentiality of personal data, the main “resource” of machine learning. Firstly, the cost of storing personal information is relatively low, so user data can long “outlive” the person who generated it. Secondly, data may be used in ways that do not match, or even defeat, the purposes for which it was created. Thirdly, data generated by one person can contain information about other people. Hence the need to regulate confidentiality: inadequate protection of personal data would lead people to stop using platforms built on machine learning, while excessive regulation would prevent providers from earning money on the analysis of user data.

Existing research shows that, other things being equal, tighter privacy regulation usually slows the adoption of new technologies and reduces innovative activity. For example, a 2011 study found that the stricter regulation of online tracking technologies adopted by European Union countries reduced the effectiveness of banner advertising by 65% compared to the United States, where the rules are more lenient. Since the most valuable tech companies today (Facebook, Google, Baidu) depend heavily on advertising revenue, stricter regulation in a particular jurisdiction may harm the competitive position of local companies with similar business models. In some areas, such as health care, patients’ lives and health may depend on the use of personal data, so in certain scenarios the right to privacy may conflict with other fundamental rights whose exercise relies on that data.

Competition for Progress

The example of privacy regulation above makes it clear that the expansion of AI technologies depends significantly on the specifics of regulation in different countries. Because data exhibits economies of scale and researchers learn to apply machine learning through practical experimentation, access to data is a key factor determining the pace of AI development. In this situation, governments have an incentive to minimize the regulatory burden, engaging in a “race to the bottom”. For example, if the regulation of personal data is tighter in the EU than in the USA, and tighter in the USA than in China, American companies will benefit relative to European ones, and Chinese companies relative to American ones: they will have access to more data and therefore more opportunities to develop new AI technologies.
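
Why access to data matters so much can be illustrated with a small simulation (our own toy example, not from the paper): the more examples a learner sees, the smaller its estimation error, so companies and countries with more data can build systematically better predictors.

```python
# A toy illustration of why more data helps: the error of a quantity
# estimated from n noisy examples shrinks, on average like 1/sqrt(n),
# as n grows. All numbers here are invented for illustration.
import random

random.seed(0)
TRUE_VALUE = 10.0

def estimate(n: int) -> float:
    """Estimate TRUE_VALUE from n noisy observations."""
    samples = [TRUE_VALUE + random.gauss(0.0, 2.0) for _ in range(n)]
    return sum(samples) / n

for n in (10, 100, 1_000, 10_000):
    print(f"n={n:>6}: |error| = {abs(estimate(n) - TRUE_VALUE):.3f}")
```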

Such a “race” could be curbed by international treaties on the regulation of personal data, but it is not entirely clear how such treaties would affect the pace of development and diffusion of AI as a whole. Economists believe that tighter regulation will most likely push innovative activity into other areas. Another possibility is that countries will join the “race to the bottom” only if they can earn rents from the development of AI: in other words, it makes sense to minimize regulation if a country has a chance to hold, for a time, a monopoly position in a particular area of AI application.

Who is Responsible for Robots and Algorithms?

Equally important for the development of AI is tort law, which governs cases in which the actions of one person cause harm to another.

When it comes to AI, self-driving cars and algorithms are the most illustrative cases. Many companies are involved in the production of a self-driving vehicle: AI developers, telecom service providers, sensor manufacturers, and the vehicle manufacturers themselves. In the absence of clear rules allocating liability in the event of an accident involving a self-driving vehicle, these companies may withhold investment in AI development. Manufacturers bear a significant share of the risk, since a self-driving vehicle is not driven by a human. The same applies to sophisticated medical equipment and algorithmic diagnostics.

The second example is biased algorithms. Agrawal, Gans and Goldfarb cite a dynamic pricing algorithm that set higher prices for advertisements targeted at women. As a result, advertising for science and engineering education turned out to be targeted disproportionately at men, who are already better represented in those disciplines. In the United States, this kind of unintentional discrimination can be grounds for a lawsuit. In other fields of AI application, such as the already mentioned medical diagnostics, credit scoring or criminal profiling, the “prejudices” of algorithms carry even more serious legal implications. As with the manufacturers of self-driving vehicles, the risk rises first of all for the suppliers of AI: even though a human decision maker may be no less biased, algorithms are by definition more “transparent” than human decision-making. It is simpler to carry out a forensic audit of source code than to question a person: an algorithm cannot give false testimony, enter into a conspiracy or withhold evidence. This improves a plaintiff’s chances in court and encourages plaintiffs to invest in litigation.
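
The mechanism behind this kind of bias is easy to reproduce. Below is a minimal sketch under invented assumptions (the audience segments, prices and budget are ours, not from the paper): an allocator that simply maximizes impressions per dollar ends up showing the ad exclusively to men whenever the female audience is more expensive to reach.

```python
# A minimal sketch of how a cost-minimizing ad allocator with no
# notion of gender can still skew delivery when one audience is more
# expensive to reach. Segments, prices and budget are invented.
BUDGET_CENTS = 100_000  # a $1,000 ad budget

# hypothetical market prices per impression, in cents: impressions
# shown to women cost more because other advertisers bid them up
price_cents = {"women": 3, "men": 1}

def allocate_greedy(budget: int, prices: dict[str, int]) -> dict[str, int]:
    """Buy the cheapest impressions first until the budget runs out."""
    allocation = {segment: 0 for segment in prices}
    for segment in sorted(prices, key=prices.get):  # cheapest first
        bought = budget // prices[segment]
        allocation[segment] = bought
        budget -= bought * prices[segment]
    return allocation

print(allocate_greedy(BUDGET_CENTS, price_cents))
# -> {'women': 0, 'men': 100000}: no rule mentions gender, yet every
#    impression goes to the cheaper (male) audience
```

The skew here comes from market prices rather than from any line of code referring to gender, which is exactly why a forensic audit of such an algorithm can readily establish the discriminatory outcome without finding a discriminatory rule.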

Regulating the Future

As of today, no consensus has been reached on the optimal economic policy for AI; however, the most important challenges that future regulation will face can already be identified.

Firstly, the economic properties of data make regulation essential to avoid market failure.

Secondly, some areas, such as health care or the justice system, require a compromise to be worked out between the right to privacy and social benefits such as public health and a fair trial.

Thirdly, formulating national economic policies for AI will require, in parallel, the elaboration of international treaties governing technology transfer.

Finally, a clear allocation of legal liability among all participants in the production chain of AI products is needed; otherwise, litigation risk may significantly slow innovation in this field.