Experience-based AI legislation in Estonia
For years, Ott Velsberg, Estonian Government Chief Data Officer, has been intimately engaged with AI legislation. Among other things, he helped to develop Estonia's own AI strategy in both 2019 and 2021, built on two principles: first, regulation should be developed based on experience with the use of AI in specific fields; and second, instead of drafting a separate national AI law, Estonia would contribute to the overarching European legislation. Both of these principles allow him to offer comments and valuable insights that, he hopes, will improve the proposed legislation.
“We welcome the proposed AI directive because we have long supported the idea that Europe-wide harmonised regulation ensures a common market, and that minimising risks increases acceptance of AI technologies,” Velsberg says. “Our concern is that the definition of AI in this draft is too wide and the scope of its application occasionally too sweeping. And I can assure you that in this assessment, we rely on the long experience Estonia has with AI in the public sector.”
Velsberg offered the example of the “Personal Data Usage Monitor”: an easily accessible, user-friendly way for Estonian citizens to see who has processed their data stored in government databases, and why.
“One simply has to log onto our national portal, click on the menu item, and every data query becomes visible,” Velsberg explains. “We are currently working on expanding this to every instance where personal data is used in Estonia. I believe this could be a useful example for European legislators who are concerned about data use transparency.”
How to define AI? And what counts as risky?
Doris Põld, CEO of the Estonian Association of Information Technology and Telecommunications, who contributed input to developing Estonia’s official position in June last year, agrees that it is reasonable to approach AI legislation through risk assessment.
“The aim of the risk-based approach is to define stricter restrictions where the risk is unacceptable and to prohibit the use of artificial intelligence in certain cases,” Põld suggests. (“We only need to consider that the alternative is to create uniform requirements for all AI regardless of its application,” Velsberg adds.) However, both Velsberg and Põld point to an unnecessarily broad definition of AI. According to Põld, the draft would bring not only complex, high-risk AI systems but also relatively simple automated IT systems and solutions under its scope.
Velsberg has done an initial assessment and concludes that 40% of all AI solutions in the Estonian public sphere might fall into the “high risk” category, while only some would fall into the minimal-risk category and almost none into the unacceptable-risk category.
“We suggest that the definition of AI should be narrower, for example, pertaining to machine learning in particular,” Velsberg says. “This would ensure that we are actually able to single out AI use from among other statistical or technological solutions.”
“Also, the list of areas where AI risk should be assessed is quite large, but our experience tells us that the actual application can be quite risk-free if the solutions are developed in a thought-out manner,” Velsberg adds. As an example, he points to the decision assistant at the Estonian Unemployment Fund that helps applicants choose a career path.
“In the current draft, employment and HR are in high-risk categories. But does the decision assistant really contain high risk? Our suggestion is to look closely at this proposed list and the real-world applications to establish a more reasonable match between the use-cases and their actual risk,” Velsberg concludes.
Concerns of added administrative burden
Because of the wide scope of the proposed legislation, Põld is concerned about the increased administrative burden for many Estonian companies. While large global companies would have few problems responding to the increased need for certification and transparency, the burden would be especially detrimental to the small and medium-sized enterprises (SMEs) that provide the majority of Estonian solutions. For their development projects, the added costs of risk assessment would stifle innovation and damage Europe’s competitiveness.
Given this broad scope, companies relying extensively on AI, such as Veriff, are looking for clarity in the rules of application for each player in the system. “This helps to understand everyone’s role and obligations,” comments Veriff’s founder and CEO, Kaarel Kotkas.
The administrative burden would also hit the public sector. Velsberg has estimated that the regulation in its initial form could altogether add roughly 93 million euros of costs to the Estonian public-sector services already in operation and under development.
“We agree with most principles that the AI regulation includes, but in some instances the requirements are unreasonable. For example, the directive demands that data sets be “representative, free of errors and complete”. When we look at the most accurate public data sets in Estonia, even census data or the land cadastre, it is not possible to achieve 100% accuracy. There is little added value or minimisation of risk in demanding this level of accuracy,” Ott Velsberg says.
A pragmatic way forward
Linnar Viik, Estonian IT visionary and one of the leading figures in the “Tiger Leap”, the state-wide digitalisation effort of the 1990s, agrees that the concerns the proposed legislation addresses are valid. But he also suggests that we should critically examine to what extent the current proposals hold true to their underlying principles.
“The Estonian approach has always been pragmatic. As we have deployed AI in our public sphere for various aims, we have always considered how our current legislative framework supports those aims, and reacted retrospectively. In this regard, the field of AI differs from the development of, say, railroads and bridges. It is very turbulent, highly competitive, and some risks and doubts will always remain,” Viik draws a parallel. “I think the lessons we have learned provide an example of how to address the risks while still allowing us to thrive in global competition.”