Artificial Intelligence (AI): perspectives on the GOOD things but also on the HARM it can bring

Author: Financial Market
Reading time: 5 minutes

Artificial intelligence (AI) is a strategic technology that offers many benefits for citizens and the economy. It will change our lives by improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, improving the efficiency of production systems through predictive maintenance, increasing the security of Europeans, and in many other ways that we can only begin to imagine.

As digital technology becomes an ever more central part of every aspect of people’s lives, people should be able to trust it. Trustworthiness is also a prerequisite for its uptake. This is a chance to build safe, reliable and sophisticated products and services from aeronautics to energy, automotive and medical equipment.

Europe’s current and future sustainable economic growth and societal wellbeing increasingly draw on value created by data. AI is one of the most important applications of the data economy. Today most data are related to consumers and are stored and processed on central cloud-based infrastructure. By contrast, a large share of tomorrow’s far more abundant data will come from industry, business and the public sector, and will be stored on a variety of systems, notably on computing devices working at the edge of the network.

Simply put, AI is a collection of technologies that combine data, algorithms and computing power. Advances in computing and the increasing availability of data are therefore key drivers of the current upsurge of AI.
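
To make that triad concrete, here is a minimal, illustrative Python sketch (not from the White Paper): a toy dataset supplies the data, gradient descent is the algorithm, and the training loop stands in for computing power. All numbers and names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data: 200 noisy samples of y = 3x + 1
X = rng.uniform(-1, 1, size=(200, 1))
y = 3 * X[:, 0] + 1 + rng.normal(0, 0.1, size=200)

# Algorithm: mean-squared-error loss minimised by gradient descent
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):                          # computing power: the training loop
    err = w * X[:, 0] + b - y                 # prediction error
    w -= lr * 2 * (err @ X[:, 0]) / len(y)    # gradient of the loss w.r.t. w
    b -= lr * 2 * err.mean()                  # gradient of the loss w.r.t. b

print(f"learned w={w:.2f}, b={b:.2f}")        # converges towards w=3, b=1
```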

SEIZING THE OPPORTUNITIES AHEAD: THE NEXT DATA WAVE

Although Europe is currently in a weaker position in consumer applications and on online platforms, which results in a competitive disadvantage in data access, major shifts in the value and re-use of data across sectors are underway.

The volume of data produced in the world is growing rapidly, from 33 zettabytes in 2018 to an expected 175 zettabytes in 2025. Each new wave of data brings opportunities for Europe to position itself in the data-agile economy and to become a world leader in this area. Furthermore, the way in which data are stored and processed will change dramatically over the coming five years.
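
As a quick arithmetic check, those two figures imply a compound annual growth rate of roughly 27% per year. The short Python sketch below reproduces the calculation, using only the 33 ZB (2018) and 175 ZB (2025) figures quoted above.

```python
start, end, years = 33.0, 175.0, 2025 - 2018   # zettabytes, figures quoted above

cagr = (end / start) ** (1 / years) - 1
print(f"implied compound annual growth: {cagr:.1%}")   # about 27% per year

# Year-by-year data volume implied by that constant growth rate
for year in range(2018, 2026):
    print(year, f"{start * (1 + cagr) ** (year - 2018):.0f} ZB")
```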

Today 80% of data processing and analysis that takes place in the cloud occurs in data centres and centralised computing facilities, and 20% in smart connected objects, such as cars, home appliances or manufacturing robots, and in computing facilities close to the user (“edge computing”). By 2025 these proportions are set to change markedly.

We’ve seen the benefits of AI – but what harm can it cause?

While AI can do much good, including by making products and processes safer, it can also do harm. This harm might be both material (safety and health of individuals, including loss of life, damage to property) and immaterial (loss of privacy, limitations to the right of freedom of expression, human dignity, discrimination for instance in access to employment), and can relate to a wide variety of risks.

A regulatory framework should concentrate on how to minimise the various risks of potential harm, in particular the most significant ones. The main risks related to the use of AI concern the application of rules designed to protect fundamental rights (including personal data and privacy protection and non-discrimination), as well as safety and liability-related issues.


AI can perform many functions that previously could only be done by humans. As a result, citizens and legal entities will increasingly be subject to actions and decisions taken by or with the assistance of AI systems, which may sometimes be difficult to understand and to effectively challenge where necessary. Moreover, AI increases the possibilities to track and analyse the daily habits of people.

For example, there is a potential risk that AI may be used, in breach of EU data protection and other rules, by state authorities or other entities for mass surveillance and by employers to observe how their employees behave.

By analysing large amounts of data and identifying links among them, AI may also be used to retrace and de-anonymise data about persons, creating new personal data protection risks even in respect to datasets that per se do not include personal data.
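
To illustrate the mechanism, the hypothetical Python sketch below links an “anonymised” dataset to a public register via shared quasi-identifiers. All records and column names are synthetic, invented purely for illustration.

```python
import pandas as pd

# An "anonymised" dataset: no names, but it keeps quasi-identifiers.
anonymised = pd.DataFrame({
    "zip":       ["1000", "1000", "2000"],
    "birthyear": [1980, 1975, 1980],
    "diagnosis": ["flu", "diabetes", "asthma"],
})

# A public auxiliary dataset that does contain names.
public_register = pd.DataFrame({
    "name":      ["Ana", "Ion", "Maria"],
    "zip":       ["1000", "1000", "2000"],
    "birthyear": [1980, 1975, 1980],
})

# Joining on (zip, birthyear) attaches names to medical records even
# though the anonymised set contained no direct identifiers.
linked = anonymised.merge(public_register, on=["zip", "birthyear"])
print(linked[["name", "diagnosis"]])
```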

AI is also used by online intermediaries to prioritise information for their users and to perform content moderation. The processed data, the way applications are designed and the scope for human intervention can affect the rights to free expression, personal data protection, privacy, and political freedoms.

Bias and discrimination are inherent risks of any societal or economic activity. Human decision-making is not immune to mistakes and biases. However, the same bias, when present in AI, could have a much larger effect, affecting and discriminating against many people without the social control mechanisms that govern human behaviour. This can also happen when the AI system ‘learns’ while in operation.
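
One common way to quantify such an effect is to compare positive-decision rates across groups (demographic parity). The sketch below does this on synthetic data; the group labels, score distribution and decision threshold are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=10_000)
# Hypothetical model scores that run systematically lower for group B,
# e.g. because group B was under-represented in the training data.
score = rng.normal(np.where(group == "A", 0.60, 0.50), 0.10)
hired = score > 0.55                       # automated yes/no decision

rate_a = hired[group == "A"].mean()
rate_b = hired[group == "B"].mean()
print(f"selection rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"demographic parity difference: {rate_a - rate_b:.2f}")
```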

In such cases, where the outcome could not have been prevented or anticipated at the design phase, the risks will not stem from a flaw in the original design of the system but rather from the practical impacts of the correlations or patterns that the system identifies in a large dataset.

The specific characteristics of many AI technologies, including opacity (‘black box-effect’), complexity, unpredictability and partially autonomous behaviour, may make it hard to verify compliance with, and may hamper the effective enforcement of, rules of existing EU law meant to protect fundamental rights.

Enforcement authorities and affected persons might lack the means to verify how a given decision made with the involvement of AI was taken and, therefore, whether the relevant rules were respected. Individuals and legal entities may face difficulties with effective access to justice in situations where such decisions may negatively affect them.
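
One technique an auditor can apply to an opaque system, assuming only query access to its inputs and outputs, is permutation importance: shuffle one input feature at a time and observe how much the model’s accuracy drops. The sketch below uses a hypothetical stand-in for the black-box model; feature names and data are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1_000, 3))                 # features: [income, age, zip_code]
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)   # ground-truth outcomes

def black_box_predict(X):
    # Stand-in for an opaque model; in a real audit this is the system
    # under inspection, queried only through its inputs and outputs.
    return (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)

baseline = (black_box_predict(X) == y).mean()
for i, name in enumerate(["income", "age", "zip_code"]):
    Xp = X.copy()
    Xp[:, i] = rng.permutation(Xp[:, i])        # break one feature's link to y
    drop = baseline - (black_box_predict(Xp) == y).mean()
    print(f"{name}: accuracy drop {drop:.3f}")  # large drop = influential feature
```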

Risks for safety and the effective functioning of the liability regime

AI technologies may present new safety risks for users when they are embedded in products and services. For example, as a result of a flaw in the object recognition technology, an autonomous car can wrongly identify an object on the road and cause an accident involving injuries and material damage.

As with the risks to fundamental rights, these risks can be caused by flaws in the design of the AI technology, be related to problems with the availability and quality of data, or stem from other problems related to machine learning. While some of these risks are not limited to products and services that rely on AI, the use of AI may increase or aggravate them.

What can be done in order to mitigate the risks?

A lack of clear safety provisions tackling these risks may, in addition to risks for the individuals concerned, create legal uncertainty for businesses that are marketing their products involving AI in the EU. Market surveillance and enforcement authorities may find themselves in a situation where they are unclear as to whether they can intervene, because they may not be empowered to act and/or may not have the appropriate technical capabilities for inspecting systems.

Legal uncertainty may therefore reduce overall levels of safety and undermine the competitiveness of European companies. If the safety risks materialise, the lack of clear requirements and the characteristics of AI technologies mentioned above make it difficult to trace back potentially problematic decisions made with the involvement of AI systems. This in turn may make it difficult for persons having suffered harm to obtain compensation under the current EU and national liability legislation.
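
One practical building block for such traceability is a decision log that records the model version, a fingerprint of the input and the output of every automated decision. The Python sketch below is a minimal illustration; the field names and file format are assumptions, not a prescribed EU requirement.

```python
import hashlib, json, time

def log_decision(model_version: str, features: dict, decision: str,
                 logfile: str = "decisions.jsonl") -> None:
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        # Hashing the input allows later verification of which data a
        # decision was based on, without storing the raw (possibly
        # personal) data in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "decision": decision,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: one appended line per automated decision.
log_decision("credit-model-1.3", {"income": 42_000, "age": 35}, "rejected")
```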

When designing the future regulatory framework for AI, it will be necessary to decide on the types of mandatory legal requirements to be imposed on the relevant actors. These requirements may be further specified through standards.

This is why the European Commission launched a Consultation on Artificial Intelligence. Citizens and stakeholders are invited to provide their feedback by 14 June 2020.

The current public consultation accompanies the White Paper on Artificial Intelligence – A European Approach (available here), which aims to foster a European ecosystem of excellence and trust in AI, and a Report on the safety and liability aspects of AI. The White Paper proposes:

– Measures that will streamline research, foster collaboration between Member States and increase investment in AI development and deployment;
– Policy options for a future EU regulatory framework that would determine the types of legal requirements that would apply to relevant actors, with a particular focus on high-risk applications.

The consultation enables all European citizens, Member States and relevant stakeholders (including civil society, industry and academics) to provide their opinion on the White Paper and contribute to the European approach for AI.

