Literature review for my topic, the progress of artificial intelligence, plus an article review for each of the articles. I attached the rubric below.

LITERATURE REVIEW DIAGRAM

NAME____________________________________

The first page = 20 points, other 10 pages = 8 points/each

Total: 20 + 10*8 = 100 points

TOPIC: ____________________________type it here________

TYPE YOUR RESEARCH QUESTION:

ISSUE 1 ISSUE 2 ISSUE 3

ISSUE 4 ISSUE 5 ISSUE 6

Name:________________________

1. Article Citation (APA format)

2. Topical Focus

3. Article Summary/Contribution to Field

FEATURED FORUM: WELCOME TO THE DIGITAL ERA: THE IMPACT OF AI ON BUSINESS AND SOCIETY

Ethical Aspects of the Impact of AI: the Status of Humans in the Era of Artificial Intelligence

Roman Rakowski¹ & Petr Polak² & Petra Kowalikova¹

Accepted: 6 May 2021
© Springer Science+Business Media, LLC, part of Springer Nature 2021

Abstract
On the one hand, AI is a functional tool for emancipating people from routine work tasks, thus expanding the possibilities of their
self-realization and the utilization of individual interests and aspirations through more meaningful spending of time. On the other
hand, there are undisputable risks associated with excessive machine autonomy and limited human control, based on the
insufficient ability to monitor the performance of these systems and to prevent errors or damage (Floridi et al. Minds &
Machines 28, 689–707, 2018). In connection with the use of ethical principles in the research and development of artificial
intelligence, the question of the social control of science and technology opens out into an analysis of the opportunities and risks
that technological progress can mean for security, democracy, environmental sustainability, social ties and community life, value
systems, etc. For this reason, it is necessary to identify and analyse the aspects of artificial intelligence that could have the most
significant impact on society. The present text is focused on the application of artificial intelligence in the context of the market
and service sector, and the related process of exclusion of people from the development, production and distribution of goods and
services. Should the application of artificial intelligence be subject to value frameworks, or can the application of AI be
sufficiently regulated by the market on its own?

Keywords: AI · Big data · Datafication · Commodification of data · Digital ideology · Ethical aspects

Introduction

We live in a period of digital turn, which is often referred to by media, theorists and experts as the fourth industrial revolution or Industry 4.0. The 4.0 concept was originally intended in relation to the field of industry and production, in which there will be such great changes that the whole social sphere will subsequently change – as was the case during previous technological revolutions. The opposite is true; it is necessary to talk more about the inconspicuous technological evolution that is taking place at all levels of society, not just at the level of industry. The reach of modern technology has long gone beyond research, development and manufacturing and has completely dominated public and private life to the point that 4.0 seems to be a society based on the interconnection of technology, people and data (Big Data). However, this means that new ethical and political challenges lie in the implementation of new technologies. On the one hand, technologies are radically changing the environment in which we live, and on the other hand, without us realizing it, they are also changing ourselves. In the context of the "digital turn", a transformation is currently affecting established modern oppositions such as subject/object, public/private, consumption/production, mind/body, work/leisure, culture/nature and so on (Chandler & Fuchs, 2019, p. 2).

* Petr Polak
[email protected]

Roman Rakowski
[email protected]

Petra Kowalikova
[email protected]

1 VSB – Technical University of Ostrava, Ostrava, Czech Republic
2 Faculty of Business and Economics, Mendel University in Brno, Brno, Czech Republic

https://doi.org/10.1007/s12115-021-00586-8

Published online: 26 May 2021

Society (2021) 58:196–203

The initial enthusiasm for scientific discoveries and innovations is seldom marked by fears of the unintended consequences of their practical application. The obstacles considered include the restriction of the field of application by legislative standards or the economic aspects of the transposition of new technology from laboratory conditions into production practice. The existence of some difference between technological possibilities and their implementation in an environment limited by economic, legal, and organizational factors is widely accepted. However, a silent precondition for the introduction of technological innovations is their presumed benefit for individuals, social groups, or society as a whole. Possible negative consequences remain below the threshold of discrimination, provided that they do not directly conflict with binding legislative or social standards in general and can be offset by positive effects in the relevant area. However, the more rapid the technological development and the more important the social role that new technologies play, the more carefully their impacts on an individual's life and the functioning of individual social subsystems should be considered (Matochova et al., 2019, p. 229; Kowalikova et al., 2020, pp. 631–636).

Responses to the dynamics of current change range from attempts to stabilize the environment by introducing new control mechanisms and increasing the frequency of controls, to the adoption of change and the restructuring of hitherto known interpretation schemes, to feelings of helplessness and alienation (Veitas & Weinbaum, 2017, pp. 1–2).

The constant presentation of risks in public space and the constant effort to reduce them significantly contribute to the disruption of the feeling of ontological security. Compared to previous stages of social development, members of advanced societies are now more likely to die from overeating than from famine, from suicide than from an attack by soldiers, terrorists or criminals, and of old age rather than from an infectious disease (Harari, 2018, p. 397).

The American theorist and philosopher Fredric Jameson, in his famous book Postmodernism, or, The Cultural Logic of Late Capitalism (Jameson, 1992), argues that new technologies help shape the subject itself under the weight of late capitalism (which is denoted by the term postmodernism). Jameson, in line with the Kantian interpretation of aesthetics, literally speaks of the technological sublime as something we are not able to reflect on from our position and understand at all (cognitive mapping). Although this thesis is particularly concerned with the periodization of postmodernism, there is another assumption in Jameson's theory that is important for understanding people in the world of new technologies (especially algorithms, AI, big data). This is a certain transformation of a social subject that adapts quickly to new "postmodern" trends (a change in the dynamics of the relationship between culture and economy, the emergence of new services, the transition to digital capitalism). If we take this analogy out of the context of the 90s and insert it into the present – a time that shows signs of a technological turnaround – it can be assumed that the syntax of the times is an algorithm applied to big data, which is mediated by new technologies (which in turn are the medium of new services and business models).

These algorithms then "help" us in orienting ourselves in the inexhaustible amount of data that new technologies (including information and communication technologies) produce ad infinitum (Ross, 2017). However, the design of these algorithms and of artificial intelligence is not neutral and hides certain pitfalls in the form of ideologies or biases that are not easy to decode (Bowles, 2018).

Big Data is an integral backdrop of our lives. However, it is useless to us if we cannot employ it in real time in the form of the personalization of various services. It decides what movies we will watch, what music we listen to, where we go on a trip, where we stay or whom we meet, whether we get a mortgage, whether a package from Amazon will arrive at our address, or whether our device's camera gives us access to our notebook based on our race (Bridle, 2019, pp. 142–143). Some camps of theorists are quick in such cases to invoke the term technological determinism, which points to the autonomy of technology. However, we will try to go beyond this pessimistic approach in this study.

Adam Greenfield's book Radical Technologies: The Design of Everyday Life (Greenfield, 2017) offers an interesting depiction in this pessimistic context. Let us imagine that we are sitting in a café recommended by an algorithm; we pay for the coffee in cryptocurrency via a smartphone, while children across the street play AR games on smart devices. This would not have been possible at all a few years ago, but today it is understood as a common routine. The whole situation is drawn up by technologies, however, not with one technology but rather with a set of individual technologies and services. At first glance, it may seem to us that these technologies are too separate to be functional and create this situation. However, their advantage is that they can be connected by an "interface" of ones and zeros. This also multiplies the efficiency of the individual technologies (Greenfield, 2017, pp. 498–500). However, it is clear that this mediation between different technologies – as we will show below – needs a clearer interpretive framework.

If we take this technological allegory to the extreme, we can say that technologies can to some extent constitute social reality (remember the social bubbles on social networks, the degradation of public space – the new agoras). In updating Jameson's theory, and with an inclination to (problematic) technological determinism, we could say that the subject in the technological turn adapts to the syntax of algorithms, artificial intelligence and Big Data. Social reality can then be deprived of chance and subtlety to the extent that it seems it can be transformed into the formal language of ones and zeros. The aforementioned Bridle also looks at this issue highly pessimistically: "In this way, computation does not merely govern our actions in the present, but constructs a future that best fits its parameters. That which is possible becomes that which is computable. That which is hard to quantify and difficult to model, that which has not been seen before or which does not map onto established patterns, that which is uncertain or ambiguous, is excluded from the field of possible futures" (Bridle, 2019, p. 44).

A fact that adds to this pessimistic view of mankind is that we ourselves have ceased to perceive algorithms and new technologies as constructs of our everyday reality. Bridle thus points to a problem that can be illustrated by the philosophical doctrine of functionalism. Every day we use the outputs of new technologies, but we have no idea how they work or what algorithms are hidden in programs, services and advertisements. Without understanding the consequences, we use these technologies as black boxes, as functions into which we enter data and from which we receive data (Bridle, 2019). This problem is illustrated by simply scrolling on social networks: the posts we see are already preselected to get our attention. If we had to see all the posts of all our friends on, for example, Facebook, we could scroll for hours before seeing something that really interests us. We could thus claim that the emancipation program of the Enlightenment is unfinished in this case, because in the technological turn, we unknowingly leave most of the decisions to algorithms and artificial intelligence. The algorithms do not even have to work very hard; social reality is simplified to the level of a formal language. And that is the reason why algorithms can have such an effect. It is therefore better not to look for complexity in the algorithms but rather at the simplicity of social reality. Our social reality is complex and diverse, but due to algorithms, it is no longer random. This is the world of computational hegemony. However, the question remains how to prevent this: responsibility and rules (ethics), or awareness and education (breaking ideology)?

In connection with the possibilities of using AI, Makridakis (2017, pp. 8–11) presents four ways of interpreting the impacts of this technology on the functioning of society. Optimists predict the utilization of the speed and memory capacity of computers and the ability to share their knowledge with the human brain. Technological innovations will allow genetics to intervene in the genetic code to prevent disease, ageing or even death. Nanotechnology will make it possible to create virtually any product at low cost, and robots that will take over all human work will allow people to choose their way of spending their free time and to choose work activities according to their interests. Pragmatists rely on the ability to control AI through effective regulation. Rather than on AI which seeks to mimic human intelligence, they focus on AI's ability to expand human capabilities to increase room for human decision-making and control. Doubters deny dystopian scenarios based on the threat of AI, pointing out that human intelligence cannot be replicated and captured in a set of formal rules. And even if it could be, it would still not be possible to machine-replace human creativity, which is based on overstepping rules – on antialgorithmic behaviour. It is creativity, based on the violation of established norms and ways of thinking, that other authors (e.g. Jankel, 2015) also consider to be an ability non-replicable by computers.

Harari (2018) warns against the division of intelligence and consciousness and draws attention to the potential danger of using unconscious but highly intelligent algorithms. If we accept the assumption that organisms are algorithms and life is data processing, then humans cannot compete with a machine that is able to make decisions based on the evaluation of all available information and to process a problem situation without consciousness – or precisely because of the lack of it – with a better result. A simple example is the comparison of accident rates between autonomous vehicles and people-driven vehicles. The mass expansion of these types of vehicles would result in a significant increase in unemployment among professional drivers, which would, in a sense, confirm the superiority of the machine over man (Makridakis, 2017, p. 10). After all, dystopian visions assume that sooner or later, originally human-made decisions will be dominated by ever more perfect machines, with better results than error-prone people would be able to achieve. This would necessarily change the whole system of social stratification. The exclusion or reduction of the role of a person in key decision-making processes connected with the functioning of society would then necessarily lead to their inferior social status.

Impacts of AI Use on Business and the Labour Market

The determining influence of digitalization and automation on the functioning of society at the level of all social subsystems is indisputable. For the destructive and creative impacts of digitalization and informatization on the labour market (the creation, extinction and transformation of professions and jobs), there are also proposals for political, economic and social measures (an increase of the minimum wage, the introduction of an unconditional basic income, support for the elderly and the low-skilled, etc.). Changes in the structure of the labour market must be accompanied by radical structural changes in society and in the way people think about work.

The social consequences of technological innovations have become the subject of analyses, as have issues of the social control of science and technology, with a special focus on the opportunities and risks that technological progress can mean for social ties, political life, value systems, etc.

The positive social potential of artificial intelligence can manifest itself at the level of supporting and securing the functioning of key subsystems of society without unspoken stereotypes, prejudices and hidden discriminatory behaviour. Furthermore, it may be reflected in changes in the structure of work-related and non-work time due to the reduction of activities performed by people, and thus in the expansion of space for social activities (the development of social relations, community life, volunteering, etc.). At the same time, possible negative impacts of the development, implementation and expansion of AI at the economic, political and social levels are considered (insufficient sociopolitical reflection on changes in the structure of the labour market, the misuse of AI by nondemocratic regimes, the limited possibilities of AI control, etc.). In this context, questions arise as to who should be involved in decision-making in the development and implementation of innovation, and how; on the basis of which criteria states should set priorities for R&D funding; how companies should measure risks and set safety standards; whether and how experts are obliged to communicate their decisions and their reasoning to the public; etc. (Matochova et al., 2019, pp. 230–231).

Floridi et al. (2018, pp. 690–694) emphasize the possible use of artificial intelligence technology to support human nature and its possibilities. On the one hand, they consider AI to be a tool for expanding the possibilities of individual self-realization and the utilization of interests, abilities, skills and aspirations: mastering routine tasks through AI opens space for more meaningful ways of spending time, and advanced intelligence can be put to positive use in human decision-making and action. On the other hand, they draw attention to the necessary responsibility in the development and distribution of state-of-the-art technologies, which should remain under human control and benefit all members of society as fairly as possible. AI technology enables the more efficient functioning of society and social systems, from the prevention and treatment of diseases to the optimization of transport and logistics to a more efficient redistribution of resources or a more sustainable approach to consumption. However, the power of technology also brings risks. According to Floridi, these are mainly associated with excessive autonomy of machines and limited human control, based on an insufficient ability to monitor the performance of these systems and prevent errors or damage. A balance needs to be struck between the ambitious projects and opportunities that AI offers to improve human life and the strength of the control mechanisms that people and societies set up.

Hawksworth et al. (2018, pp. 1–17) in their report identify three phases of AI involvement in the functioning of various areas of society, especially with regard to the shape of the labour market. Until the early 2020s, they expect an algorithmic wave, reflected in the automation of simple computational tasks and the analysis of structured data. For this reason, they consider the sectors based on routine data processing, i.e. finance and insurance, but also the area of information processing and communication, as the most accessible to automation – and the most risky in terms of maintaining the number of jobs. The second half of the 2020s will be hit by a wave of augmentation, based on dynamic interaction with technology in administrative support and decision-making and on the automation of repetitive tasks, including the analysis of unstructured data in partially controlled environments. The sectors concerned will be public administration and self-government, production, warehousing and transportation. In the 2030s, the autonomous wave should reach its peak, which presupposes the full automation of physical labour, machines with manual dexterity and problem-solving skills in dynamic situations and in the real-world environment, where an immediate response is required. This phase of the use of state-of-the-art technology will affect the construction sector, water management, wastewater treatment, waste management, etc.

In their analysis, Hawksworth et al. (2018, p. 2) assume that in the short term, the most vulnerable jobs will be in the financial sector and insurance, and jobs held more frequently by women. From a long-term perspective, the vulnerable group is represented by employees in the transportation sector, more often men and people with lower qualifications (which confirms the importance of investing in lifelong learning or retraining). The same authors identify risk areas in terms of the negative impact of the automation process by country, industry and type of worker. The share of jobs at risk reflects the country's average level of education. It thus ranges from 20 to 25% of positions in some East Asian and Nordic economies with a high level of education of the population, to 40% of positions in Eastern European economies based mainly on industrial production. Between these extremes are economies dependent primarily on services but with a significant proportion of low-skilled workers (UK, USA). Within 10 years, after the widespread use of autonomous vehicles, transportation will be one of the most vulnerable sectors in terms of maintaining the structure of jobs. Currently, the riskiest sectors are those dependent on the routine processing of structured data, such as the financial sector and insurance. At the same time, the least vulnerable groups include workers with a university degree who, in addition to their expertise, also show a higher degree of adaptability to technological change. Such qualified employees are also more likely to hold higher management positions, where a lower level of susceptibility to automation is expected. Like workers with lower education, older workers may have a lower degree of adaptability. In the case of manual work and positions in the transportation sector, where men are more frequently represented, a higher degree of threat to the stability of positions can again be assumed through the process of automation. However, the same is true for women in administrative positions.

Ethics of AI vs. Ideology of AI

The book Future Ethics, together with the theory of Andrew Feenberg, points out that technologies are not inherently neutral (Bowles, 2018, p. 2); an ideology is encoded in their very design to distort our usage: if technology forces us to pay attention to it, that was the intention.

Design is applied ethics. Sometimes this connection is obvious: if you design razor wire, you are saying that anyone who tries to contravene someone else's right to private property should be injured. But whatever the medium or material, every act of design is a statement about the future. Design changes how we see the world and how we can act within it; design turns beliefs about how we should live into objects and environments people will use and inhabit. (Bowles, 2018, p. 4)

However, the problem that Bowles outlines here can be placed at the normative level, where he works with three strands of ethics (deontological ethics, virtue ethics, utilitarianism) and leaves it to the designer (of AI and algorithms) to decide ethically – whether the consumer succumbs to the ideology of design is purely up to the designer, and the problem of ideology is transferred to become their responsibility. In essence, this is a naive normative guide for the digital capitalism industry. The problem, however, is that Bowles excludes those who are most affected – the technology users and data producers – from decision-making. In this context, it is clear that it is the political theory of technology, rather than ethics, that needs to be thought about. The political theory of technology offers an opportunity to change technology – the democratization of technology, that is, how to intervene in its design – retroactively through society. We can see this possibility on two levels: (1) the deideologization of technology – one must realize that we can actually influence technology by our decisions (Allmer, 2017); (2) the democratization of technology – through clear disagreement or the detournement of technology, we can achieve a change in the goal of the technology (Feenberg, 2009).

On one side there is a responsible designer, on the other a conscious society. Should the designer succumb to the values and ethics of the company, there is a society that is being used by the company. It is therefore clear that the requirement of ethics alone is inefficient; the competitive environment itself would have to change.

Ethics and Political Philosophy of AI

If we want to talk about the political philosophy and the ethics of artificial intelligence, we should distinguish between political philosophy and ethics – albeit inextricably linked – in relation to new technologies. If we look at the ethics of AI, the most common approaches that appear in the context of the algorithm are deontological ethics, virtue ethics, and utilitarianism. Here we are talking rather about the individual level, where the design itself is produced, which is supposed to have a certain impact on the individual and society. However, if we look at the political philosophy of new technologies (algorithms and artificial intelligence), we should inquire more broadly into whether the new technologies concern society as a whole. Here we should then distinguish between the (A) critical and the (B) liberal branch of the political philosophy of technology. (A) The critical theory of technology looks at the power and ideological relations of technology – as in material capitalism, new technologies are considered only as means of production. Here, the specific contradictions that lead to the non-transparent design of AI (how one is deliberately manipulated by data in favour of digital capitalism) should be theorized. (B) The liberal branch asks how to set the rules, on the one hand, so that technologies are not too limited/regulated (the issue of freedom) and, on the other hand, so that these technologies are created for the benefit of society and address the problems of the current environmental crisis. These are purely political issues of technology.

Critical Theory of Big Data

In his theory, Allmer (2017) works with tools that allow shared data to be critically examined from the perspective of economic-power relations. Although the data seems to be handled by users, it is actually owned by large companies, which ultimately decide how to handle it. This fact is worrying, however, and it is necessary to examine the extent to which it affects the user (i.e. the social entity).

The main premise is that capital is accumulated through user data, making this digital environment (such as social media) an arena of struggle in which (as in any mode of production) class and social contradictions arise (Allmer, 2017, p. 5). The fact that this principle of capital accumulation has been transferred from the material environment of commodities to the digital world is part of the evolution of commodification. Commodifying public goods (such as data) has a number of complications: digital reproduction emphasizes the privatization of data. For this reason, it is necessary to create new forms of capital, and it is best to involve the very user, who is constantly producing data, in this digital production. If we stick to the vocabulary of critical theory, this phenomenon can be labelled with the terms digital alienation and digital exploitation.

For this analysis, Allmer uses Marx's reasoning, which he places in the current context. According to such an interpretive framework, in a capitalist society the asymmetry of the relationship of power is embodied in the very design of technology (Allmer, 2017, pp. 16–26). Technology is understood as a reflection of social relations, and for this reason, as we have seen above, it cannot be understood as neutral. The goals of technology thus correspond to the goals of capital itself (ibid.). Thus, technology cannot be designed outside of a social context. An illustration of these theses can be seen in the birth of a new rationality, which comes with technology at the time of industrialisation and is the essence of mass production and the transformation of the whole so-called base (Horkheimer & Adorno, 2007).

The problem with rationalization is that even if technology can be taken out of context (e.g., historical expropriation), the essence of rationalization will still remain in it – for example, automation does not by itself lead to human emancipation (as was originally the idea of Herbert Marcuse). The problem is how to work with the potential of new technologies. How do we even discover the emancipatory potential of new technologies? Are technological or political changes needed for this emancipation? In this case, we will be helped by the critical theory of technology (the dialectics of technology and society), which points to the socially conditioned construction of technology and the impact of technology on society (Allmer, 2017, p. 42).

Democratization of New Technologies and Big Data

Following the example of Feenberg (2009), one can distinguish two main currents in the theory of technology. The first is the so-called instrumental current, which speaks of technology as the interconnection of technology with the value context of society (culture or politics). Technical tools are understood as neutral means serving only social goals. Technology is just a tool to achieve efficiency. Such an approach is purely functional: technology is designed outside of political ideology. The second stream, the substantive one, attributes an autonomous force to technology that prevails over traditional and competitive values. It therefore denies the neutrality of technology and emphasizes the negative consequences of technology for humanity and nature. Technology has become part of the lifestyle and everyday life; it has dominance over us and there is no escape from it. The opposite is a return to the traditional values of romantic simplicity (a certain apocalyptic vision).

If we think about the ethics of AI – that is, that the designer
can consciously modify the ideology of an algorithm and AI –
we should also think about the defence options of society,
which will be affected by AI. As we saw earlier, Feenberg’s
theory of democratization of technology could help us with this.

Feenberg’s theory represents a non-deterministic approach
to technology. Technology must not be considered as a set of
devices or the