
In the tenth and final episode of our “Human 2040” series, entitled “I Communicate”, we look at how we will communicate with each other. Will we still need a smartphone to talk to a voice assistant in 2040? How will technology help read our emotions and anticipate our needs? And in an era of rapidly developing artificial intelligence, will we still be able to tell information from fake news on our own? In the latest instalment of the series, Polityka Insight analysts take a closer look at trends such as communication without active human involvement and the growing role of intelligent sensors. The guest of the podcast on communication, hosted by Andrzej Bobiński, managing director of Polityka Insight, is Mariusz Chochołek, president of T-Systems Polska, who is also responsible for the largest business-client segment at T-Mobile Polska.



CIRI INTENDS TO LAUNCH A STATE-OF-THE-ART VIRTUAL ASSISTANT THAT WILL BE DIRECTLY CONNECTED TO THE USER'S BRAIN

The company's decision sparked tremendous enthusiasm on the New York Stock Exchange.

Ciri, the voice assistant, has accompanied us in meetings for many years, taking ever more extensive notes, writing summaries, filling in calendars, making appointments and buying the products we need. Since the beginning of the 2030s, the app, previously used mainly in business, has been built into the operating system of most mobile devices as a permanent feature and has become one of the most widely used digital tools of the last decade.

Yesterday the app informed users of the plan to implement a direct connection to the user's brain. Ciri will randomly select a representative group of 10,000 people who will be offered participation in the test phase of the project. If the pilot is successful, the functionality will shortly be made available to all users as part of the upcoming major annual system update. The New York Stock Exchange reacted very enthusiastically to this unexpected news. Ciri's share price rose by 121%, and the run on the shares lifted the AI500 (+63%), the Cyber100 (+24%) and the exchange as a whole by almost 5%.

The ‘Thought is the new voice’ project is likely to represent a breakthrough in the development of a new generation of virtual assistants. For years, many companies have worked on an interface that would allow people's wishes and needs to be read directly from their thoughts. Most of the attempts made so far, however, have foundered on so-called ‘thought interference’: devices collecting plans and commands were unable to filter out the subconscious and produced an uncontrolled, endless inventory of commands and activities. The fact that Ciri, seen as one of the most traditional and conservative technology companies, decided to publish plans for the implementation of a direct interface shows that a technological breakthrough must have taken place. What exactly was this breakthrough? It is unknown, because the details of the pilot are kept strictly secret by the company. We will learn more on 1 January, when the selected users start the tests.

The plan to roll out the direct interface was also enthusiastically received by the cybersecurity industry. Experts comment that the decision to prepare the latest generation of Ciri for commercialization must mark a breakthrough in its security features. For many years Ciri has been the target of relentless hacker attacks and is perceived as something of a Holy Grail by most cybercriminals. According to CyberUSofA, the upcoming update must therefore also mean a technological breakthrough in cybersecurity: the company would not decide to deploy such sensitive technology without 100% confidence that it can protect its product from external interference.

AI-EMOTION WILL OBTAIN THE APPROVAL OF THE POLISH SOCIETY FOR DIGITAL AND MENTAL HEALTH

A system that assesses our interactions with our environment and tailors the content we receive to our needs can be used in the treatment of loneliness and depression.

AI-Emotion is a system of sensors used to track emotions. Users choose which of their own parameters they wish to monitor: biological (e.g. blood pressure), physical (e.g. weight), environmental (e.g. air temperature), behavioural (e.g. sleep length) and, above all, mixed parameters (e.g. psychophysical). The sensors capture a full record of an individual's external world (experiences). The sensations of all the senses and all reactions to external and internal stimuli are recorded, so we know exactly how we feel and why. The business use of these technologies is obvious: companies can see how their products are perceived in the real world. The system also feeds algorithms that help us choose what we desire: what to eat, watch or listen to, and which physical activity will make us feel better.

Despite initial controversy, the ‘quantified self’ movement has won over a host of enthusiasts and has become a permanent part of our lives. Simple algorithms of the 2020s reproduced our choices and encouraged us to entrench them. With technological progress, the programs began to diversify our choices, looking for alternatives on the basis of our other preferences and ever larger datasets. However, building a business offering on our digital footprint raised objections and concerns about breaching the existing anonymization guidelines.

The AI-Emotion era has begun. The system allows us to follow our emotions and reactions more closely, and thanks to anonymized data sets, MDPs (mega data packs) and IUDTs (individual unplugged data trusts), it has become possible to create secure offerings that actually exceed our needs and expectations.

Uses for quantified-self data have long been sought in other areas of life. Tests were launched to combine experience data based on emotional states with mental health care. Today, it seems that these applications may be numerous. AI-Emotion will facilitate the work of therapists, who will be able not only to talk about recollections and opinions, but also to rely on data analysis and discuss recorded key moments. The use of VR tools to anticipate how patients will respond in different situations is also being considered. The initial resistance of the Polish Society for Digital and Mental Health has been broken, and work is about to start on new applications and scripts for psychotherapists to use in their daily work with patients.

THE EUROPEAN COMMISSION UNVEILS PLAN FOR DEEP FAKE DIRECTIVE

Its success is only moderate, however: as part of the implementation of the Directive, Member States themselves will decide on the obligations of platforms to combat deep fakes.

The Commission and WoSoM.org (World Association of Social Platforms) have come to an agreement under which Brussels will prepare a directive governing responsibility for hosting deep fakes. Platforms will have to inform all users (logging on from EU servers) whether a post has been ‘deep faked.’ This is a considerable success for DG DigCom, which has been trying ineffectively for many years to have the platforms recognized as publishers, and not just hosts, of the content published on them. It was also possible to rebut the platforms' second line of defence, namely that they were unable to take responsibility for determining whether posted videos had been manipulated.

The Commission's success is only partial, however. It has not been possible to establish a single protocol of so-called ‘takedown rights and obligations’ for the EU as a whole. As part of the implementation of the EU rules, each country will be able to establish its own procedures for handling material identified as a deep fake. Individual countries are in a weaker negotiating position than the Community and, with few exceptions, will in practice be forced to ignore the guidelines of the EC, which call for automatic blocking of content. According to experts, only Sweden, France and Scotland will decide to do so.

Poland is in the group of countries likely to temporarily block ‘content of uncertain origin’: users will have to show that a video may be socially harmful in order to have it blocked. These countries are also unlikely to block content from so-called verified publishers. Countries such as the Netherlands and Ireland, in turn, will confine themselves to informing users that content is a deep fake. These more digitally liberal states fear lawsuits over taking down content that may be considered artistic or political expression; blocking it could then be seen as an infringement of freedom of expression.

Even now, artificial intelligence can write journalistic and advertising texts that are often indistinguishable from those written by humans. OpenAI's GPT-3 achieves the best results. Using the same technology, AI assistants of the future will be able to respond to us convincingly, imitating our way of expressing ourselves and anticipating our thoughts.

Read more:
Floridi, L., Chiriatti, M. (2020) GPT-3: Its Nature, Scope, Limits, and Consequences.
Sterling, B. (2020) Web Semantics: Microsoft Project Turing introduces Turing Natural Language Generation (T-NLG).
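
To give a concrete flavour of the text generation described above, here is a minimal sketch, assuming the Hugging Face transformers library and the publicly available GPT-2 model (GPT-3 itself is only reachable through OpenAI's API, so GPT-2 stands in here); the prompt is invented for illustration.

# Minimal text-generation sketch with a pretrained language model.
# Assumes: pip install transformers torch; GPT-2 stands in for GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "By 2040, voice assistants will"
outputs = generator(prompt, max_length=60, num_return_sequences=1)

# The model continues the prompt in a style learned from web text.
print(outputs[0]["generated_text"])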

A brain-computer interface (BCI) enables brain activity to be translated directly into instructions for an external device. BCIs already allow robotic prostheses to be controlled and basic emotional states to be identified, and in the future they will also enable brain-to-brain communication. Stephen Hawking, often mentioned in this context, in fact spoke and gave lectures using an assistive system driven by a cheek-muscle sensor rather than a direct brain interface.

Read more:
Vidal, J. J. (1973) Toward direct brain-computer communication.
Moses, D. A. et al. (2019) Real-time decoding of question-and-answer speech dialogue using human cortical activity.
Newton, C. (2019) Brain-computer interfaces are developing faster than the policy debate around them.
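
As an illustration of what ‘translating brain activity into instructions’ can mean in practice, below is a toy decoder that maps band-power features of a one-second, single-channel EEG window onto a simple command. Real BCIs involve careful signal acquisition, artifact removal and per-user calibration; the sampling rate, frequency bands and threshold here are purely hypothetical.

# Toy BCI decoder: frequency-band power of an EEG window -> command.
# All numbers below are illustrative assumptions, not clinical values.
import numpy as np

SAMPLING_RATE = 256  # Hz, assumed

def band_power(signal: np.ndarray, low: float, high: float) -> float:
    """Average spectral power of `signal` in the [low, high] Hz band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / SAMPLING_RATE)
    mask = (freqs >= low) & (freqs <= high)
    return float(spectrum[mask].mean())

def decode_command(eeg_window: np.ndarray) -> str:
    """Map one second of single-channel EEG to a device command."""
    alpha = band_power(eeg_window, 8, 12)   # band associated with relaxation
    beta = band_power(eeg_window, 13, 30)   # band associated with focus
    return "MOVE_CURSOR" if beta > alpha else "IDLE"

# Usage with a synthetic window of noise standing in for real EEG data.
window = np.random.randn(SAMPLING_RATE)
print(decode_command(window))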

Emotional artificial intelligence is a field of research concerned with measuring, understanding, simulating and responding to human emotions. Artificial intelligence will help interpret emotions on the basis of facial expressions, posture and gestures, physiological indicators and spoken words. Understanding human emotions will allow for more reliable human-computer interaction and for more accurate calibration of the content displayed to users, which is already applied, among other things, in the creation of advertisements.

Read more:
Gujral, R. (2019) The Future Of AI: How Emotion AI Is Making Robots Smarter.
Somers, M. (2019) Emotion AI, explained.
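
A rough sketch of the multimodal idea described above: fusing a facial-expression score, a physiological reading and text sentiment into one coarse emotional label. The signals, weights and thresholds are invented for illustration and do not reflect any real emotion-AI product.

# Hypothetical multimodal emotion estimate; every constant here is made up.
from dataclasses import dataclass

@dataclass
class Observation:
    smile_intensity: float   # 0..1, e.g. from a facial-expression model
    heart_rate: float        # beats per minute, e.g. from a wearable
    text_sentiment: float    # -1..1, e.g. from a language model

def estimate_emotion(obs: Observation) -> str:
    """Combine crude valence and arousal cues into a coarse label."""
    arousal = (obs.heart_rate - 60) / 60
    valence = 0.6 * obs.text_sentiment + 0.4 * (2 * obs.smile_intensity - 1)
    if valence > 0.2:
        return "excited" if arousal > 0.5 else "content"
    if valence < -0.2:
        return "stressed" if arousal > 0.5 else "sad"
    return "neutral"

print(estimate_emotion(Observation(smile_intensity=0.8, heart_rate=95, text_sentiment=0.4)))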

Quantified Self is an international community of people who use technology to monitor themselves. Their aim is to gain greater knowledge about themselves. They most often use smart bands that measure heart rate, steps and sleep. In the future, data from new sources will also be analysed, e.g. an audiovisual record of a person's entire day combined with a record of all their emotional reactions.

Read more:
Villarroel, M., Frigo, A. (2017) Self trackers: Eight Personal Tales of Journeys in Life-logging.
Wang, J. et al. (2017) Quantified Baby: Parenting and the Use of a Baby Wearable in the Wild.
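
For illustration, a minimal example of the kind of daily summary quantified selfers derive from smart-band data; the readings and the simple aggregation rules are made up.

# Toy daily self-tracking summary built from invented smart-band readings.
from statistics import mean

day = {
    "heart_rate_bpm": [62, 58, 71, 90, 65],  # samples taken during the day
    "steps": 8432,
    "sleep_hours": 6.5,
}

summary = {
    "resting_hr": min(day["heart_rate_bpm"]),
    "avg_hr": round(mean(day["heart_rate_bpm"]), 1),
    "step_goal_met": day["steps"] >= 10000,                 # assumed 10k goal
    "sleep_deficit_h": max(0.0, 8.0 - day["sleep_hours"]),  # assumed 8h target
}
print(summary)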

The development of artificial intelligence facilitates audio and video manipulation in ways that are difficult for both humans and algorithms to detect. The term ‘deepfake’ was coined by a Reddit user who in 2017 created a group (inactive since 2018) dedicated to software that generated pornographic content using the faces of celebrities.

Read more:
Deeptrace (2019) The State of Deepfakes. Landscape, threats and impact.
Sohrawardi, S. J. et al. (2020) DeFaking Deepfakes: Understanding Journalists’ Needs for Deepfake Detection.

Takedown rights, i.e. the right to have content concerning oneself temporarily removed from the internet. As the amount of problematic material increases, users will gain greater control over content that concerns them. A technological solution is also possible: all faces in social media visual materials would be blurred until the person shown in a given photo has consented to its publication.

Read more:
Collins, A. (2019) Forged Authenticity: Governing Deepfake Risks.
Morar, D., Santos, B. (2020) Online content moderation lessons from outside the US.
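
One way the face-blurring idea mentioned above could be sketched, assuming OpenCV's bundled Haar-cascade face detector; the consent store and file paths are hypothetical simplifications.

# Blur every detected face unless it appears in a (hypothetical) consent list.
import cv2

CONSENTED_REGIONS = []  # hypothetical: (x, y, w, h) boxes with confirmed consent

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_unconsented_faces(image):
    """Return the image with every non-consented face region blurred."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        if (x, y, w, h) in CONSENTED_REGIONS:
            continue  # this person has agreed to publication
        face = image[y:y + h, x:x + w]
        image[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)
    return image

# Usage with a hypothetical input file.
image = cv2.imread("photo.jpg")
cv2.imwrite("photo_blurred.jpg", blur_unconsented_faces(image))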