2020s Retrospective: The Decade of the Digital

Author: Mukund Shyam

Published on: 05 02 2026


Note: This is a piece I intend to turn into a more comprehensive video at some point. Please drop your thoughts and feedback in the comments!

The 2020s have been, at the very least, an interesting decade. It started out with a bang—with, of course, the largest pandemic in a hundred years—and only got stranger from there (even as the pandemic faded). One of the more interesting developments of this decade, though, is that technology has entered public discussion in a way it hasn't before. For the longest time, technologists (and technology companies) were somewhat marginal forces in society-building; their image (however untrue) was one of a scrappy company of nerds trying to make transformative technology simply for the greater good.
This image has all but died in the 2020s; technology companies are very much part of mainstream discourse. This is, of course, for good reason—tech companies are the wealthiest companies in the world, and a few massive tech firms basically singlehandedly hold up the American stock market (and by extension, the world economy) today. Furthermore, people are beginning to understand the influence of technology companies on people's behaviour (especially as media companies and ad platforms) and thus are beginning to scrutinise the decision-making within these firms.
On top of that, technology and tech companies have become tied to political identity in a way they have not been before, at least not in the recent past. More on this later.
In other words, the "real world" became tied up with the digital in a manner that is now nigh on impossible to ignore.

Data Centres and Capital Good Flows

If one is to talk about technology development in the 2020s, starting with the topic of data centres is not a bad idea. Arguably the most divisive and politically charged topic of technological development in the last year or so, the question of data centre buildouts is perhaps the best example of how contemporary technological change has become intertwined with particular political positions.
I imagine that, prior to now, data centres were the concern exclusively of startup CEOs and (optimistically) nerds; now, everyone (a) knows what they are, and (b) has an opinion about them. Largely, the discourse surrounding data centres has been about energy and water use, as well as the economics of data centre investment (albeit the latter in fewer circles).
I don't think this is a bad thing, though. There is some value in knowing about the world around you—particularly if one lives in a community especially likely to be affected by data centre construction.

When talking about data centre investment, it would be remiss of me to not mention the intimate link between silicon and geopolitics. This is not a particularly new link—capital good flows have always been influenced by geopolitical considerations—but the recognition of silicon (and in particular American silicon designs) as being a major determinant of technological domination in the digital age seems to be a more 2020s fact.
Silicon development (and questions surrounding computer chips in the geopolitical sphere) is especially interesting to me in the context of the 2020s, given the discourse (largely among industrialists and tech bros) about what to do about China's growing presence in areas like software (see also: TikTok) and AI. Some people argue that America should completely block the sale of Nvidia chips to China to prevent it from challenging the US's dominance in the AI world, while others argue for a weird drip-feed kind of policy, where chips are sold but in lower volumes and at lower capabilities, such that China becomes reliant on American technology without being allowed to overtake the US in capability. China, in what is assuredly a brilliant political manoeuvre, has decided to continue developing its in-house designs despite today having relatively easy access to good American chips.
China seems to have understood the importance of technology (and in particular tech infrastructure) to geopolitical might much more quickly than the US has. It has already dominated access to a number of cobalt and lithium mines in central Africa and South America, its vast high-quality manufacturing infrastructure has given it a major head start in solar power, and its near-complete monopoly over rare-earth mineral processing has made it a power you simply cannot avoid if you are in any way involved in technology.

Interestingly, the push-and-pull between the US and China has created a major vacuum for a non-Chinese high-quality technology manufacturing industry to serve American markets, which was filled rather quickly by Taiwan.
Taiwan is the biggest microchip-manufacturing country in the world, and this has afforded it a hugely important niche in global markets. It seems like Taiwan's prowess has made it even more important to the mainland Chinese government's cause of reunification, since access to Taiwan's high-quality silicon manufacturing industry would be a major win for China—while also giving Taiwan enough leverage to warrant protection from the US in particular, since the latter doesn't want to risk losing its wafer factory to China.
As much as I find the dynamics between these countries interesting, though, I don't think this is particularly good for the world at large. In an age where the fate of the post-War world order seems more precarious than ever, I don't think deeper economic rivalries between countries are healthy. They really hurt our ability to sustain a globalist world order (at least, insofar as the current world order is globalist; obviously there's good debate about that point as well).

The Decade of the Polarisation of Tech

Now, before I get into it, I want to make clear what I mean by the phrase "technology has become political". I am not suggesting that the integration between tech and politics is especially new or radical. I don't think it is; I think politics touches everything, and tech certainly falls within that. Social forces influence absolutely everything.
Furthermore, I don't even think this is purely a symptom of the 2020s. That's obviously not the case; even in my lifetime we've seen a deep intertwining of tech and politics, with the Arab Spring and the Cambridge Analytica fiasco. It stretches even further back, too—people who were ostensibly technologists were tremendously involved in the military campaigns of Nazi Germany. And I imagine it stretches all the way back to the printing press and the Enlightenment (although I haven't really looked into it yet).

What I do think is a major shift, though, at least in recent political history, is that technology has somehow become entwined with certain political identity markers. Again, this is certainly not an entirely new phenomenon—particularly in the context of workers' movements through the arc of history, many identities were defined by their opposition to technological developments, most famously the (original) Luddites—but I think in the last 50-or-so years technology (and in particular the internet) has largely been seen as having neutral political valence. Tech, so to speak, was largely seen as a tool to be used by political groups to serve their own ends.
Now, though, it seems like technology—and certain technology companies and products in particular—has tied itself to specific political identities. There is an anti-big-tech strain within certain circles in both right-wing and left-wing politics, and simultaneously a more neutral or perhaps pro-big-tech strain in both as well; thus being pro- or anti-big tech has come to signal a specific ideological position.

More weirdly, to be a Tesla owner has now become synonymous with being a right-wing nut job. If you are on Bluesky, you are a blue-haired lefty; if you're on Twitter (or X, the everything app), you might as well be a Nazi.
This is not to say that these associations are entirely unfounded—Bluesky really does seem to tilt further left than Twitter, though I don't know if the average Tesla owner is a right-winger—but my point is simply that political identity markers have become tied up with people's relationships with technology in a way that is somewhat new.
This is perhaps best illustrated by the whole AI development thing. One's relationship to AI has become overtly political—if you are pro-AI you are right-wing, and if you are anti-AI you are a leftist. Perhaps more problematically, you cannot be a pro-AI (whatever that means) leftist; that just does not fit in the scheme of modern politics somehow.

In this manner, the link between politics and technology has changed in the last few years; and this change is defining a lot of contemporary political discourse as well.

AI, labour, and human brains

With any technological revolution comes a major change in the dynamics of the labour market, and it seems like AI is no different. But to me, the changes so far are due more to the promise of AI than to its actual effects.

I don't doubt that people use AI—it's definitely useful in certain contexts—but the extent to which it has become a necessary part of modern knowledge work is, I think, at the very least in question. It seems to me that a large part of the impact on the knowledge-work labour market right now comes from the fact that so many venture capitalists (and institutional investors) see AI as a transformative technology despite it not doing much yet. And so a lot of corporations and executives are under pressure to show that they are using AI, meaning a lot of people are experiencing the potential downsides of AI without the upsides (in particular, entry-level workers are losing jobs).
Whether it is causing any major change in the way people work right now is still a major question mark, in my opinion. Even in coding jobs (arguably the task LLMs are the best at), whether or not LLMs are helpful is at the very least contested. I agree with journalists at Wired, who suggest that AI seems to be a simple scapegoat for executives to cut manpower in an economic environment that is already very unstable. It's not that AI is itself causing the layoffs—they were perhaps on the cards to begin with—but AI offers a good (and perhaps more importantly, investor-friendly) reason for these layoffs to take place.
What I think is certainly true, though, is that these changes are not even close to ending. The transformations set to take place in the labour market—whether in the context of knowledge work, hiring, or creative work—are going to be, I imagine, major; but we simply don't know what these changes are going to be.

Now AI, just like computers and calculators before it, seems to be creating a major change in the way people's brains work. (Side note: I think everything changes how people's brains work? But I suppose that's just a technicality).
Computers, in the recent past, have made memory somewhat useless; what people (in my experience almost exclusively AI doomers and AI glazers) seem to suggest is that AI is going to make thinking somewhat useless. Now, if this is true, this is certainly going to make a major change in how people live.
Now, I think this largely isn't going to happen; I don't think we yet have the technology to make robots as good as humans at thinking, and even if we did, I don't think humans would need to stop thinking. Maybe it'll change the kinds of skills we deem valuable (both socially and economically), but I don't think it'll spell the end of thinking as a skill.

What there is already a major shift in, though, is education. AI (in particular, large language models) is, in my opinion, extremely good at one thing and one thing only—cheating. AI is a ridiculously powerful cheating machine; it is stupidly good at taking existing information and synthesising it in a decent and (more importantly) somewhat novel way, meaning its use is very difficult to detect.
Of course, people have attempted to solve this problem—AI checkers come to mind—but in my experience they simply don't work the way they're supposed to. It is far too easy to use LLMs in a way that gets you most of the way there, saving you a huge chunk of time. This has created a weirdly perverse set of incentives: because one can largely get decent grades with AI-written work without much effort, the incentive runs against doing the work one ought to. It has also put far too much power in the hands of teachers and professors, allowing them to decide based effectively on vibes whether something is "legit" writing or AI. It's just nuts, it is completely breaking the system of education we currently have in place, and I don't see an easy way of solving it.
The effect of technology on young people is already widely discussed (if still quite understudied), and it seems to me that AI is simply making that, broadly, worse.

Intellectual property and creative work post-ChatGPT

If we are to start talking about AI, though, the whole tech-society link becomes a whole lot more interesting.

AI is a major threat to copyright and IP laws. This is perhaps extremely obvious, but I still think it's worth stating out loud.
The whole method of AI training has created a major split in interpretations of IP law. One camp holds that training is akin to human inspiration (nothing is completely new, and all that), while the other holds that training—at least without permission and/or compensation—is theft.
You may have your own opinion on which it is (and it is likely correlated with your political opinions; see above). But it is certain to be a major, major problem in jurisprudence. It also has major political implications—in the same way property rights protect both rich industrial landholders and small-scale farmers, IP rights benefit both major corporations (like Disney, notably) and small-scale artists.
What is to happen? Will we see a complete abandonment of the idea of intellectual property ownership? Is that a good thing? Whatever happens, I think a sea change in this field is almost certain.

Discussions about AI would be notably incomplete if the impact of generative AI on the media industry were not brought up. There are quite a few ways in which generative AI has changed (and will perhaps continue to change) the way media is made today.
For one, you're already starting to see artists (and in particular artists with a specific, unique skillset) lose jobs to generative AI. A good example is the (reasonably creepy) completely AI-generated Coke ad; I imagine that ad cost the Coca-Cola Company and its advertising agency a lot less than a "normal", human-created advertisement would have. Largely because of this—because it's just so cheap—I imagine this trend will continue at least into the near future.

Something to note is that the advertising industry relies heavily on novelty for particular brands and products to separate themselves from the rest. One consequence of this may be some brands returning to human work for advertisements, to separate themselves from the same-y, bland, AI-generated aesthetic that I think will become more and more common in the years to come.
More broadly, though, I think the introduction of generative AI to media environments has made the skillsets required for success change; I think nowadays, given the pervasiveness of generative AI for writing and summarisation, writing decently well isn't an amazing competitive advantage anymore. Similarly, specific technical skillsets like visual effects and whatnot may not be as employable as they are right now in the years to come.

Of course, people have acknowledged this shift. Tons of people (including creatives!) argue that, more than ever, people are now distinguished by their taste, not their skills. People who cultivate good taste will end up in a really good spot to have a creative career.
To be clear, I don't think this is necessarily a good thing, but it is something to note. It's also a shift that has been taking place for a good while now; with the internet and computers, access to publication, skills, and an audience has never been easier. And so a shift from technical skills (like recording engineering, filming, visual effects, etc.) towards more taste-based skills (mixing, colouring) is, I think, bound to follow.

Speaking of media, one cannot ignore the influence of AI on journalism. Again, I think good journalism will survive; good reporting and story-writing grounded in the real world doesn't really have any major competition. But equally, I think there is a lot of scope for crappy AI-generated work to flood the market, crowding out space particularly for junior writers.
Another impact on journalism is the general shift in people's behaviour from using search engines (i.e., using platforms like Google to find something they are looking for) to answer engines (i.e., using LLMs to get answers to the questions they want). People don't search for links anymore, they search for answers. This might spark a shift away from the ad-supported models that have defined (or at least partly supported) modern internet journalism for decades now. I suppose we're already seeing this shift; outlets like The Verge have already transitioned towards subscription-first models.

Something that remains to be seen is the impact of deals made by news outlets with AI companies. Obviously, companies as influential as The New York Times can afford to negotiate with OpenAI for compensation for their IP; but some major questions I continue to have are: (a) if media houses ink exclusive IP deals with AI firms, does this create differences in what data particular AI companies have access to, and by extension what kinds of outputs they generate; (b) do companies actually respect these deals (and, more broadly, robots.txt files); and (c) what is to be said about smaller journalists and writers?
The final question could perhaps be answered by an initiative from Cloudflare, which has created a product that (a) prevents AI crawlers from accessing websites and (b) provides financial compensation to websites that are crawled. I had felt like these were problems that could only be fixed through regulation, but it seems technical solutions could also play a role.
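As a concrete aside, the robots.txt file mentioned above is just a plain text file served at a site's root. A publisher wanting to opt out of AI training crawls might publish something like the sketch below; the user-agent names are real AI-crawler identifiers (GPTBot is OpenAI's, CCBot is Common Crawl's, Google-Extended governs Google's AI-training use), though `example.com` is of course a placeholder. Crucially, compliance with the file is entirely voluntary on the crawler's part—which is exactly why the question of whether companies respect it matters.

```text
# robots.txt — served at https://example.com/robots.txt

# Ask AI-training crawlers to stay away entirely:
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# ...while still allowing ordinary search indexing:
User-agent: Googlebot
Allow: /
```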

More on social media

One of the more interesting pieces of technology discourse over the last two or three years is whether social media should have age limits. Some countries have recently put such policies in place: Australia has banned phones in schools, and the UK has begun to age-lock certain websites. Something like this has long been in place in China, where access to video games has been time-limited by the government for a good while.
Discourse about this has evolved somewhat quickly as well, particularly after these policies were put into place. The dominant belief for a long time was that (a) phones are bad in some way, and (b) there needs to be regulation surrounding them. Rapidly, though, the discourse seems to have shifted in the other direction, at least within a certain portion of the internet. People have begun to question the effects of technology bans for kids, arguing that technology has positive effects which cannot be ignored (like improving access to educational resources and community; the latter, especially, has been brought up as a major benefit for queer kids in areas without a physical queer community they can access).
Another question I have is how this will transform the social lives of school-age children; the tremendous increase in phone access and use has already changed the social dynamics of school life in a very real way. Does changing regulation around phones in schools change this? Or does it simply move the perverse aspects of social media fully into the home?

I still think most people (at least on its face) agree that technology needs to be regulated, particularly for children. I, like many, am interested in the impact of short-form vertical video on kids; it seems so singularly designed to prey on people's lack of willpower, and thus seems especially effective (insofar as effectiveness is defined as the ability to hold on to users' attention) on children. Regulation, then, seems a good thing to think about, at the very least.
The issues, though, seem largely to concern the means of execution, particularly in the case of the UK, where ages are confirmed using AI-powered facial recognition. This doesn't feel right; it (a) relies on firms rather than government machinery, (b) further legitimises the (already crazy) level of surveillance carried out by big tech, and (c) creates a perverse incentive for tech companies to build bad AI face-detectors (since this gives them access to more users while providing plausible deniability if they are brought to court). While age verification is perhaps a decent idea in principle (or at the very least something worth testing out), the way it has been carried out has been anything but positive.

I think discourse about short-form vertical video is incomplete without dealing with the question of radical political movements. This trend—of more and more insane right-wing ideology becoming accepted in the mainstream, and gaining increased favour especially among young men—is not simply a function of social media; I'm sure other sociological factors (the slipping economy, inequitable access to resources, disintegrating democratic machinery) play a huge role in the move of people like Andrew Tate and Nick Fuentes out of niche internet spaces. But part of it is certainly down to the fact that (a) social media allows literally anyone to say anything they want, (b) the immense reach of short-form vertical video lets insane ideas be consumed so frequently that they stop feeling insane, and (c) media algorithms create an incentive for outrage. It seems to me that this is an outcome very much informed (and created) by our present moment.

More generally, though, I think social media algorithms in particular have accelerated a kind of anti-intellectualism that is perhaps best stated as "The Death of Nuance."
The fact that these algorithms favour engagement above all else has incentivised a discourse culture defined exclusively by outrage: people stake out more and more radical positions simply because it's the only way to elicit a reaction (and thus, engagement).
Simultaneously, nuanced positions are deprioritised in the public eye. Nuance is difficult; it takes energy to understand a complex position. Why would I, as a consumer of content, spend that energy when someone else can explain the same events in a more bite-sized way (even if it's radical)?
In some sense, I think this observation has defined the post-COVID internet experience; the way the internet feels to use day to day (as well as the general impact it's having on society nowadays) is in many ways downstream of this death.


Predictions for the rest of the decade

I have a few ideas about what will take place in the rest of the decade in the tech world. Some of these predictions are probably gimmes—they're very likely to happen—while others are more long-shots.

SynthID and Content Credentials achieve increased adoption—particularly in the media industry—and yet fail

Technical solutions for recognising and keeping track of AI-generated content (images and videos in general) are bound to develop and gain increased adoption—SynthID (by Google) and Content Credentials (promoted largely by Adobe) already exist—but I don't think this will prevent the misinformation associated with AI-generated content.
Part of this is because I think there will always be some kind of way to get around these markers (such an arms race already exists concerning AI text detectors in colleges), but part of it is also because I don't think people will know enough to use these markers effectively! OpenAI's Sora videos have pretty noticeable watermarks in them, and yet, anecdotally, they are being shared as if they're fact.

The AI bubble bursts, and takes with it AI-only social media feeds (a gimme)

This is a major gimme, in my opinion. The level of investment in AI is simply unbelievable, and a lot of these investments are weirdly circular (with a whole bunch of companies relying on each other for liquid cash to fund development). This is not to say that the technology will disappear—I think the bubble bursting will be accompanied by what is going to be one of the biggest shifts in technology ever.
One of the issues with AI right now is that we don't really know what to do with it. Nothing it's been used for so far has fundamentally changed the world, at least financially; it's been more hype value than anything else.
Once investors realise this—that hundreds of billions are being spent on technologies that we don't know what to do with—the bubble pops. And perhaps the biggest example of the fact that we don't know what to do with AI (i.e., AI generated video feeds) goes along with the pop.

Anti-short form counterculture dominates YouTube

Again a bit of a gimme; in some spaces we're already seeing a shift away from engagement-farming, MrBeast-style videos, which have been criticised quite heavily for their style. I'm simply riding this wave: people will recognise that engagement baiting takes place on short-form video too (without much monetary gain for individual creators), and thus a counterculture movement will take shape against short-form video (at least insofar as a platform-wide movement can take place in the modern landscape of YouTube).

A major technological development is socialised in an OECD country

It seems inevitable to me that either the internet or AI will be socialised in an OECD country.
The former has already been pseudo-socialised in a number of OECD countries, and so becoming publicly owned in yet another country seems reasonably plausible. More radically, though, I think at least one country will socialise and publicly own an AI model. I don't really know what that means, technologically speaking, but I'm still willing to make that bet.

Chatbots get good at writing school essays

The one thing chatbots are extremely good at right now is cheating, and this problem (i.e., people using chatbots to cheat on essays) is simply going to get worse. Some schools will get ahead of the curve, stop using AI detectors (as many already have), and simply expect more from their students! If an AI model can write at a certain level, students will be expected to do better—to be clearer, more articulate, more interesting.

At least 2 Big Tech companies are split due to antitrust concerns

This sounds like a gimme—after all, Big Tech companies are simply enormous and concerns about their monopoly power have been in the public consciousness for the last few years. But I don't think it's as inevitable as it seems.
For one, basically every Big Tech firm is based in the US, where antitrust investigations seem to be all but dead nowadays. Further, the rise of huge startups like OpenAI and Anthropic has taken the heat off of Big Tech, at least for the time being (see also: the Google case concerning Chrome).
Yet, I think two companies will be forced to split. Google is almost certain to be one of them; it will be forced to sell off either Chrome or its ads business (especially if Gemini maintains its spot among the top three AI models). The other is likely to be Meta; its AI lab has been acquiring a bunch of startups, and its ads business is just as important as (if not more important than) Google's.

Making-of content gains immense traction

I would like to say I'm ahead of the curve with this prediction, but I might just be riding the wave.
With AI-generated content (and consumers' negative responses to it) becoming more and more commonplace, people will be forced to find better ways to prove that their work is (a) their own and (b) made with real human effort. And so even big-budget shows and movies will release behind-the-scenes and making-of pieces alongside the show or movie.
You're already seeing this with the new Apple TV mnemonic; it was incredibly well received because it was made entirely practically (and this behind-the-scenes process was released as well).

India's technology services industry collapses (in some form)

Somewhat vague, but I want to keep a number of possibilities open!
I think India's tech industry is especially vulnerable to advances in AI. If LLMs get good enough at coding, the skills that have defined India's advantage in the tech and IT industries worldwide will cease to be an advantage nearly overnight. Obviously, companies in India will take a while to respond to this shift—and the best (or biggest) ones will be able to adapt—but it will certainly transform the skilled labour market in India by making engineering (as a field of education) less job-friendly than it was before.

Data Centres accelerate the transition to green energy

Perhaps a bit of an optimistic prediction (especially given the present moment where data centre buildout is affecting our ability to stick to our carbon goals in a very real way), but I am quite convinced that this will take place.
This is not because I think companies are going to be particularly carbon-conscious or anything; it's simply because (a) big tech companies don't want to be reliant on an unreliable grid or fluctuating oil prices and would much prefer to "control the stack", as it were; (b) renewable electricity today is cheaper than fossil-fuel-powered electricity; and (c) companies are already acquiring old nuclear power plants and making deals with makers of micro-nuclear reactors, suggesting a commitment to nuclear energy to power their data centres.


To conclude, then: the 2020s have so far been the setting for a number of major shifts in both technology and public responses to it. The decade has moved technology and tech companies into the mainstream, and has made the industry overtly intertwined with the concerns of society writ large.
Thank you for reading! Drop your thoughts down in the comments on Substack or send me an email at hi@mukundshyam.com!

