Data collected from users feeds algorithms that shape our perception of the world and the way we engage with information, selecting what we see, read and listen to on the web, most of the time with a hidden agenda.
by João Janeiro
Image: Nikhil Raj for Factor Daily
[dropcap]F[/dropcap]or me, Artificial Intelligence (AI) has always been around. From our early years, we Generation X fellows grew up with the concept of AI in movies and cartoons, some more convincing than others. For instance, the “talking robot” character never gave me the chills until I first saw the movie 2001: A Space Odyssey. Now that I think about it, it is clear why: the machine couldn’t be seen or touched. It was an amorphous creature, everywhere at once, in control of its environment, yet without physical form.
Although, in the way I envisaged the future as a child, “flying cars” would arrive before “talking robots”, I have watched the latter emerge from the screen. AI algorithms are present in our most mundane activities, like searching the web and browsing content online. We engage with them daily without consciously knowing it.
To quote the late Stephen Hawking: “Artificial Intelligence can be the biggest event in the history of our civilization. Or the worst. We just don’t know.” This uncertainty is closely linked to the discrepancy between technological leaps and social progress, a discrepancy that has never been as clear in human history as it is now. We are on the verge of having “talking robots” in a decade when news headlines could not be more dystopian. The reason sounds too cliché, yet it is relevant: no technological advance will make us better people, because improvement is something we, unfortunately, cannot delegate. It needs to come from within.
[beautifulquote align=”left” cite=””]school curricula are biased towards science, engineering, and mathematics, while non-empirical matters such as ethics, literature and the humanities are sidelined.[/beautifulquote]
Two important pillars of social progress are ethics and moral reasoning, and these emerge from our collective intellectual effort. Nowadays, this effort is being outsourced to algorithms developed to save us time and keep our lives simple and efficient, so that, at the end of the day, we have a few free hours to swipe through the Facebook newsfeed, update our Tinder profile or surf the web looking for the next big hit to define our “monthly personality”. We worship technology so much that we introduce it to our kids when they are toddlers, and we shape school curricula towards science, engineering, and mathematics while non-empirical matters such as ethics, literature and the humanities are sidelined. It is difficult to foresee social progress in the next generation under this scenario.
Technology companies work with this new ‘human’ paradigm and draw their strength from it. All emerging apps and gadgets have one thing in common: data gathering. Smartphones are the obvious example: they keep a record of our daily activities, monitor our health, interpret our facial expressions. But they go far beyond that. All this data feeds algorithms that shape our perception of the world and the way we engage with information, selecting what we see, read and listen to on the web, most of the time with a hidden agenda.
[beautifulquote align=”left” cite=””]tech companies cannot foresee all the consequences and applications of their creations.[/beautifulquote]
Huge amounts of money from all sorts of industries flow into technology companies, fuelling investors’ interests, making them bigger than governments, and giving them the ability to hide from the tax collector and increase their profits even further. This is money that should be available to improve health and education systems around the world, to mass-distribute the “red pill” of a social awakening. Recent scandals, which revealed how our personal data and a social network were used to undermine the democratic process in the US and the UK, highlighted an issue that should concern us all: tech companies cannot foresee all the consequences and applications of their creations. Tight regulation and techno-ethics are words we should urgently be shouting in the streets.
Would it be possible to break this cycle? Can society be empowered against this invisible threat? A quick journey across the world takes away most of our hope: while China blocks most big technology companies from operating within its borders, it uses the same technology to perpetuate a dictatorial regime; Russia relies on cyberwar, cyberespionage and cybersecurity to safeguard its political agenda; the US elected Trump, so we can’t expect much. Europe appears like a distant lighthouse on a stormy winter night, planning an online turnover tax for big tech firms, implementing a Digital Single Market Strategy, enforcing the General Data Protection Regulation. But how feasible are these regulations in our geopolitical context?
It is paramount to understand that this vicious cycle starts and ends with each of us. We are our own first line of defence, and we must recognise our individual responsibility. Becoming active players in the process of social empowerment against this Goliath means changing our values, our priorities and our dreams. Do not mistake this for religious propaganda, or for a modern oriental mantra asking us to leave all this behind and find true happiness elsewhere (as long as there is 4G coverage). We need to regain control over our data: a democratic goal that will not be achieved easily, but one that needs to be fought for daily.
João Janeiro is an ocean scientist, father and part-time baker interested in understanding the mystical forces governing our world.
Jonathan Camilleri says
We are arguing here that information empowers me with knowing, but where do I go with this information?
Essentials says
Some really valid points here, João – AI-driven applications and functionality in consumer tech do indeed seem to be irrevocably on the rise, and some of the applications are potentially earth-shattering. ICE only recently abandoned its idea of using machine learning technology for “extreme vetting” of foreign visitors; the Washington Post article on it made our curation list for the topic when it was recently put online.