A decade in review in tech

Cindy Sridharan
Jan 1, 2020

As 2019 draws to a close, I wanted to jot down some thoughts on the most important technological adoptions and innovations in tech this past decade. I also look a bit into the future and enumerate some pain points and opportunities that could be addressed in the coming decade.

I must add the caveat here that this post doesn’t cover developments in fields like data science, artificial intelligence, frontend engineering and more, since I don’t personally have much experience in these areas.

The Type Strikes Back

One of the more welcome trends of the 2010s was the resurgence of typed languages. True, typed languages never quite faded into oblivion (C++ and Java were dominant in 2010, as they still are now), but dynamic languages saw a sharp uptick in usage after the Ruby on Rails movement emerged in 2005. The momentum seemed to have hit a crescendo with the open sourcing of Node.js in 2009, which made JavaScript-on-the-server a reality.

As the decade progressed, dynamic languages lost some mindshare when it came to building server-side software. The Go programming language, popularized by Docker and the container revolution, proved better suited for building highly performant, concurrent, resource-efficient servers (something the creator of Node.js has himself concurred with).

The Rust programming language, introduced in 2010, embraces advances in type theory to provide a safe, typed language. While Rust adoption in industry was somewhat tepid in the first half of the decade, it saw a significant uptick in the second half. Notable examples of Rust usage in industry include Dropbox’s use of Rust for Magic Pocket, AWS’s Firecracker, Fastly’s (now the Bytecode Alliance’s) ahead-of-time WebAssembly compiler Lucet, and more. With Microsoft exploring the use of Rust to rewrite parts of the Windows OS, I think it’s safe to say that Rust promises to have a bright future in the 2020s.

Even dynamic languages acquired optional types. TypeScript made it possible to write typed code that compiles down to JavaScript. PHP, Ruby and Python gained gradual type systems (Hack, Sorbet and mypy respectively), which are now in production use.
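As a concrete illustration, here’s a minimal sketch of what gradual typing looks like in Python with mypy (the function names are hypothetical): annotations are checked statically by the type checker, ignored at runtime, and can coexist with untyped code.

```python
from typing import Optional

# Annotated code: mypy checks these signatures statically;
# the annotations have no effect at runtime.
def find_user(user_id: int) -> Optional[str]:
    users = {1: "ada", 2: "grace"}
    return users.get(user_id)

# Untyped code can coexist with typed code: that's the "gradual" part.
# mypy simply treats these values as dynamically typed.
def legacy_lookup(user_id):
    return find_user(user_id)

# Running `mypy` over this file would reject a call like
# find_user("1") with an incompatible-type error, well before runtime.
print(find_user(1))
```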

Putting the SQL Back in NoSQL

NoSQL is another technology that seemed far more popular at the beginning of the decade than it does at the end. I think the reason is two-pronged.

First, the NoSQL model, with its lack of schemas and transactions and its weaker consistency guarantees, proved harder to program against than the SQL model. In a blog post titled “Why you should pick strong consistency, whenever possible”, Google states:

One of the things we’ve learned here at Google is that application code is simpler and development schedules are shorter when developers can rely on underlying data stores to handle complex transaction processing and keeping data ordered. To quote the original Spanner paper, “we believe it is better to have application programmers deal with performance problems due to overuse of transactions as bottlenecks arise, rather than always coding around the lack of transactions.”

The second reason is the rise of “scalable” distributed SQL databases like Cloud Spanner and AWS Aurora in the public cloud, as well as open source alternatives like CockroachDB, which address most of the underlying technical reasons why traditional SQL databases “didn’t scale”. Even MongoDB, once the poster child of the “NoSQL” movement, now offers distributed transactions.

For situations that require atomicity of reads and writes to multiple documents (in a single or multiple collections), MongoDB supports multi-document transactions. With distributed transactions, transactions can be used across multiple operations, collections, databases, documents, and shards.
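As a minimal sketch of what this looks like in practice, here’s a multi-document transaction with the PyMongo driver (assuming MongoDB 4.0+ running as a replica set; the collections and documents are hypothetical):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumes a replica set
db = client.shop

# Transactions are scoped to a session: either both writes below
# commit, or neither does. Exiting the block cleanly commits;
# an exception aborts.
with client.start_session() as session:
    with session.start_transaction():
        db.orders.insert_one({"item": "book", "qty": 1}, session=session)
        db.inventory.update_one(
            {"item": "book"},
            {"$inc": {"qty": -1}},
            session=session,
        )
```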

Stream All the Things

Apache Kafka was, hands down, one of the most important innovations of the 2010s. Open sourced in January 2011, Kafka revolutionized how businesses handle data. Kafka has been in use at every company I’ve worked at, from startups to large corporations. The guarantees it provides and the use cases it enables (pub-sub, streaming, event-driven architectures) have been used to implement everything from data warehousing to monitoring to streaming analytics, across a wide cross-section of enterprises (finance, healthcare, government, retail and more).
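The core primitive behind all those use cases is simple: producers append records to a topic (a durable, ordered log), and consumer groups read from it at their own pace. A minimal pub-sub sketch using the confluent-kafka Python client (the broker address, topic and group names are placeholders):

```python
from confluent_kafka import Consumer, Producer

# Producer: append a record to a topic. Kafka persists it in an
# ordered, replicated log that any number of consumers can read.
producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce("payments", key=b"user-42", value=b'{"amount": 100}')
producer.flush()

# Consumer: each consumer group tracks its own offset in the log,
# which is why pub-sub, streaming and replay all fall out of one model.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "fraud-detector",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["payments"])
msg = consumer.poll(5.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```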

Continuous Integration (and, to a lesser extent, Continuous Deployment)

Continuous Integration (CI) wasn’t invented in the 2010s, but this is the decade in which it became widespread, to the point of being part of the default workflow (run tests on every pull request). The rise of GitHub as a code hosting and development platform, and more importantly of the GitHub workflow, meant that running all the tests before merging a pull request to trunk is the only development workflow many engineers who began their careers this decade have ever known.

Continuous Deployment (deploying every commit as and when it lands on trunk) isn’t quite as widespread a practice as Continuous Integration (anecdotally speaking). But with the myriad cloud APIs for deployment, the growing popularity of platforms like Kubernetes that provide a standardized API for deployments, and the advent of multi-platform, multi-cloud tooling like Spinnaker built atop those standardized APIs, deployment practices in general have become more automated, more streamlined and, generally speaking, safer.

Containers

Containers were probably the most hyped, talked about, marketed and misunderstood piece of technology to gain adoption in the 2010s, but also one of the most important. Part of the reason behind the cacophony was the mixed messaging we were bombarded with from seemingly all corners. Now that the commotion has died down ever so slightly, certain things stand out in clearer relief.

Containers did not become popular because they were the best way to run applications for the wider developer community. Containers became popular because they became the marketing cry for a tool that solved a wholly different problem. Docker proved to be a fantastic developer tool which solved the very real problem of “works on my machine”.

To be more precise, the Docker image was revolutionary, since it solved the problem of parity between environments and true portability of not just an application binary but all of its software and OS dependencies as well. The fact that it also somehow ended up popularizing “containers”, which really are a very low level implementation detail, is probably one of the most puzzling things of this decade to me.

Serverless

The advent of “serverless” compute is arguably even more important than containers, for it truly makes the dream of “on-demand compute” a reality. In the past five years, I’ve watched serverless compute routinely expand its scope (by adding support for more languages and runtimes). The introduction of products like Azure Durable Functions seems like the right step toward making stateful functions a reality (while addressing some of the concerns around the limitations of FaaS). It’ll be interesting to see how this new paradigm of compute evolves in the coming years.
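The programming model underpinning all of this is strikingly small: a stateless function invoked per event, with the platform owning provisioning and scaling. A sketch in the shape of an AWS Lambda Python handler (the event fields here are hypothetical):

```python
import json

# A FaaS unit of deployment: a stateless function. The platform
# creates and tears down instances on demand per event, which is what
# makes "on-demand compute" real. Any durable state must live in an
# external store, a limitation that stateful offerings like Azure
# Durable Functions aim to address.
def handler(event, context):
    order_id = event.get("order_id", "unknown")  # hypothetical field
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": order_id}),
    }
```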

Automation Reigned

The operations engineering community probably benefited the most from the rise of automation, since it made developments like “infrastructure as code” a reality. This went in lockstep with the rise of “SRE culture”, which aims to take a more software-oriented approach to operations engineering.

The API-ification of Things

Another interesting development has been the API-ification of a lot of development concerns. Good, hackable APIs enable a developer to build novel workflows and tooling, which in turn helps with maintenance and usability.

API-ification is also the first step toward SaaS-ifying a piece of functionality or a tool. It coincided with the rise of microservices, since a SaaS tool now became just another service one spoke to over an API. Quite a number of SaaS and FOSS tools in areas like monitoring, payments, load balancing, continuous integration, alerting, feature flagging, content delivery networks and traffic engineering (DNS, for instance) have thrived this decade.

Observability

It’d be fair to say that we have far better tools now than we’ve ever had to monitor and diagnose application behavior. The Prometheus monitoring system, open sourced in 2015, is perhaps the best monitoring system I’ve ever worked with. While not perfect, it got a significant number of things right (not least its support for dimensional metrics).
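Dimensionality here means a metric is identified by a name plus arbitrary key-value labels, which queries can later slice and aggregate across. A small sketch with the official Python client, prometheus_client (the metric and label names are illustrative):

```python
from prometheus_client import Counter, Histogram, start_http_server

# One metric name fans out into a separate time series per label
# combination; a PromQL query can then aggregate across any dimension,
# e.g. sum(rate(http_requests_total[5m])) by (status).
REQUESTS = Counter(
    "http_requests_total", "Total HTTP requests", ["method", "path", "status"]
)
LATENCY = Histogram("http_request_duration_seconds", "Request latency", ["path"])

def handle_request(method: str, path: str) -> None:
    with LATENCY.labels(path=path).time():
        pass  # actual request handling goes here
    REQUESTS.labels(method=method, path=path, status="200").inc()

start_http_server(8000)  # exposes /metrics for Prometheus to scrape
```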

Distributed tracing was another technology that more people than ever became aware of in the 2010s, thanks to initiatives like OpenTracing (and its successor OpenTelemetry). While tracing is still somewhat hard to use, some of the recent developments in the space give me hope that the 2020s will unlock the true potential of trace data.
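The core model is a trace as a tree of spans, each a timed operation annotated with attributes. A sketch using the OpenTelemetry Python API (the API was still evolving at the time of writing, so exact calls may differ; span names and attributes are hypothetical, and an SDK with an exporter must be configured for spans to actually be recorded):

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Each `with` block records a span; nesting produces the parent-child
# edges that get stitched together into a traceview. Propagating
# context across service boundaries is handled by instrumentation
# libraries.
def checkout(cart_id: str) -> None:
    with tracer.start_as_current_span("checkout") as span:
        span.set_attribute("cart.id", cart_id)
        with tracer.start_as_current_span("reserve-inventory"):
            pass  # call the inventory service
        with tracer.start_as_current_span("charge-card"):
            pass  # call the payments service
```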

Looking into the Future

Many pain points remain that could be better addressed in the coming decade. Here are some thoughts on what those pain points are, along with some ideas on how they might be tackled.

Addressing the End of Moore’s Law

The end of Dennard scaling and the slowdown of Moore’s Law call for new innovations. This lecture by John Hennessy makes a good case for why “domain specific architectures” (like TPUs) might be one answer to the slowdown. Toolkits like MLIR from Google already look like a good step forward in this direction:

Compilers are expected to readily support new applications, to easily port to new hardware, to bridge many levels of abstraction from dynamic, managed languages to vector accelerators and software-managed memories, while exposing high level knobs for autotuning, enable just-in-time operation, provide diagnostics and propagate functional and performance debugging information across the entire stack, and delivering performance close enough to hand-written assembly in most cases. We will share our vision, progress and plans towards the design and public release of such a compiler infrastructure.

CI/CD

While CI becoming more mainstream was definitely one of the standout advances of the 2010s, Jenkins, a tool that long predates the decade, still remains the workhorse of CI.

This space is in dire need of innovation, in the following areas:

  • the user interface (the DSL used to encode test specifications)
  • the implementation details that’ll make it truly scalable and fast
  • integrations with different environments (staging, prod, etc.) to enable more advanced forms of testing
  • continuous verification and deployment

Developer Tools

As an industry, we’ve begun to build ever more complex and impressive software. However, when it comes to our own tools, it’s fair to say that we could be doing a lot better.

While collaborative editing and remote editing (over an ssh session) have gained some traction, these haven’t quite become the new standard way of development. If you, like me, detest the idea of needing to be connected to the internet to be able to code, coding over ssh on a remote machine might not exactly be your cuppa.

Local development environments, especially for engineers working on large-scale service-oriented architectures, still remain a pain point. While there are projects trying to solve this problem, it’d be interesting to explore what the most ergonomic UX might look like for this use case.

It’d also be interesting to explore how we can take the concept of “portable environments” to other areas of development like reproducing bugs (or flaky tests) encountered in specific environments or settings.

Other areas where I’d love to see more innovation include semantic code search, “context-specific” code search, and tools that correlate production incidents with specific parts of the codebase.

Compute (The Future of PaaS)

While containers and “serverless” were the buzzwords that garnered the most hype in the 2010s, the spectrum of compute in the public cloud has broadened a fair bit in recent years.

This poses a number of interesting questions. First of all, the number of options available in the public cloud seems to be ever expanding. Cloud vendors have the personnel and the resources to easily keep up with the latest and greatest developments in the open source world, and to roll out products like “serverless pods” (by, I suspect, making things like their custom FaaS runtimes OCI compatible) or whatever the next fad is.

If you happen to be using any of these cloud solutions, then you’re enjoying the best of all worlds, in a manner of speaking. Hosted Kubernetes offerings in the cloud (GKE, EKS, EKS on Fargate etc.), in theory, provide a cloud-agnostic API to run workloads. If you’re using any of the custom cloud products (ECS, Fargate, Google Cloud Run etc.), then you’re probably already leveraging the best features the cloud provider is offering and will most likely have an easy migration path when the cloud provider introduces the next product or compute paradigm.

Given how rapidly this spectrum is evolving (I’d be very surprised if there aren’t at least 2 or 3 more such options in the next couple of years), it’s going to be incredibly difficult for small internal platform teams (infrastructure teams at companies whose job it is to build an on-premises platform to run workloads) to keep up in terms of feature parity, ease of use or general reliability. While the 2010s pitched Kubernetes as a toolkit to build a PaaS (platform-as-a-service), building an internal platform on top of Kubernetes that offers the same degree of choice, ease and freedom as is available in the public cloud seems pretty quixotic to me. Going all in with a “container” based PaaS as the “Kubernetes strategy” is tantamount to willfully restricting yourself from leveraging the most innovative features of the cloud.

When I look at the compute options available today, rolling your own PaaS purely on top of Kubernetes seems akin to painting yourself into a corner. It doesn’t come across as very forward thinking. Even if one painstakingly builds a container-based PaaS on Kubernetes today, in a couple of years it’s going to look outdated compared to what’s possible in the public cloud. Kubernetes might’ve begun its life as an open source project heavily inspired by an internal Google tool, but that tool was originally designed in the early-to-mid 2000s, when the compute landscape looked radically different.

Furthermore, in very broad brushstrokes, it’s no more the core competency of enterprises to become experts at running Kubernetes clusters than it is to build out and maintain custom data centers. Providing a reliable compute substrate is the core competency of cloud providers.

Finally, I also feel we’ve regressed a bit as an industry when it comes to user experience. Heroku launched in 2007 and to this day remains one of the easiest platforms to use. While Kubernetes is undoubtedly far more powerful, programmable and extensible than Heroku, I do miss how easy it used to be to get started with and deploy to Heroku. If one knew how to use git, one could use Heroku.

Which brings me to my next point: we need better, higher level abstractions (especially the highest level abstraction) to work with.

The Correct Highest Level API

Docker is a perfect case study in the need for better separation of concerns and for getting the highest level API right.

The problem with Docker was that (at least initially) “Docker” stood for way too much, all under the guise of solving the problem of “works on my machine” using “container technology”. Docker was an image format, a runtime with its own virtual network, a CLI tool, a daemon running as root and much more. The messaging, if anything, was more confusing, what with talk about “lightweight VMs”, control groups, namespaces, myriad security concerns and features interspersed with the marketing cry of “build, ship, run any app, anywhere.”

As with all good abstractions, it takes time (and experience and pain) to separate the various concerns into logical layers that can be composed together. It didn’t help much that before Docker could reach this point of maturity, Kubernetes came into the fray and monopolized the hype-cycle so consummately that one was now trying to keep up with all the developments in the Kubernetes ecosystem in addition to the “container” ecosystem.

Kubernetes, in many ways, shares Docker’s problems: for all the talk about how it offers a fantastic and composable abstraction, the layering of concerns isn’t particularly well encapsulated. At its heart, it’s a container orchestrator that runs containers on a cluster of machines. This is a fairly low level concern, applicable only to engineers operating a cluster. Yet Kubernetes is also the highest level abstraction: a CLI tool that end users are supposed to interface with via YAML.

Docker was (and remains) a fantastic developer tool, whatever its other flaws might be. While it tried to do and be a lot, it got the highest level abstraction right. By “the highest level abstraction”, I mean that the subset of functionality the target audience (in this case, developers who spend most of their time in their local development environments) was truly interested in worked well out of the box.

The Dockerfile and the docker CLI tool deserve to be a case study in building a good “highest level UI”. A lay developer could get started with Docker without knowing or understanding much about the implementation details that contributed to the operational experience, such as namespaces, control groups, and memory and CPU limits. Writing a Dockerfile, ultimately, wasn’t terribly different from writing a shell script.

Kubernetes has multiple audiences:

  • cluster admins
  • infrastructure software engineers extending Kubernetes and building a platform on top of it
  • end users who interact with Kubernetes via kubectl

Kubernetes’ “one API fits all” approach presents a mountain of complexity that isn’t encapsulated well enough, with no guidance on how to scale the heights, leading to a learning curve that’s unnecessarily steep.

I’d argue that a lot of infrastructure technology today is too low level (and ergo deemed “too complex”). Kubernetes is fairly low level. Distributed tracing as it’s implemented today (a bunch of spans stitched together to form a traceview) is too low level. Tools built for developers that nail the “highest level abstraction” tend to be the most successful. The converse holds in a surprising number of cases: if a technology is too complex or hard to use, the “highest level API/UI” for that technology is yet to be discovered.

Right now the cloud native ecosystem is a bit of an embarrassment of low level riches. As an industry we need to innovate, experiment and educate more on what the right “highest level abstraction” would look like.

Retail

One industry that’s not seen much change in the digital experience in the 2010s is retail. While on the one hand the ease of online retail has sounded the death knell for several brick-and-mortar retail stores, on the other, shopping online still remains fundamentally similar to how it was a decade ago.

While I have no particular insights on how retail should evolve in the coming decade, I’d be thoroughly disappointed if in 2030 we were still shopping the way we do in 2020.

Journalism

I’ve been feeling increasingly disillusioned about the state of journalism in the world. It’s becoming more and more difficult to find non-partisan outlets that report the news accurately. Every so often, the line between news and comment gets blurred. News is often reported through a partisan lens. This is especially true in certain countries where there’s historically been no division between news and comment. In a recent article published after the most recent UK general election, Alan Rusbridger, the former editor of The Guardian, wrote:

The one over-riding thought is that for many years I looked at US newspapers and pitied colleagues there who “just” ran the newsroom, leaving comment pages to others. Pity has turned to envy. I now think it would be cleansing for all British national newspapers to split the responsibility for news and comment. It’s simply too hard for the average reader — especially, but not only online — to tell the difference.

Given Silicon Valley’s less-than-stellar track record when it comes to ethics, I’d be the last person to trust tech to “disrupt” journalism. That said, I (and several others I know) would love for there to be a news outlet that’s trustworthy, impartial and disinterested. I have no further insights or ideas on what such a platform might look like, except my conviction that in an era where the truth is increasingly hard to discern, there’s a greater need than ever for honest journalism.

Social Media

Social media and crowdsourced news seem to be the primary source of news for many people across the world, and the lack of accuracy, along with the unwillingness of some platforms to do so much as a basic fact check, has resulted in, among other things, genocide and interference in elections.

Social media is also more powerful a medium than ever before. It’s changed political discourse, for better and for worse. It’s changed the way brands market to customers. It’s changed popular culture (with recent developments like “cancel culture” being almost entirely social media driven). While detractors claim that social media has been a conduit for rapid and capricious change in “moral fashions”, it’s indubitable that social media has given a platform to folks from marginalized groups who’ve hitherto not had one. In essence, social media has changed the way people communicate and express themselves in the 21st century.

That said, I’ve also grown to believe that social media abets the worst human impulses. Nuance and thoughtfulness are often sacrificed on the altar of the hot take, and it’s becoming next to impossible to respectfully disagree with certain opinions or stances. Polarization runs amok, all but guaranteeing that certain views go unheard while absolutists police matters of etiquette and acceptability online.

It’s not clear to me whether it’s possible to build a “better” platform that optimizes for higher quality discussions, not least because what drives “engagement” is often what makes these platforms profitable in the first place. As Kara Swisher writes in the New York Times:

There are ways to foster digital interaction that do not have to incite rage. The reason much of social media feels so toxic is it has been built for speed, virality and attention grabbing rather than for context and accuracy.

It would indeed be regrettable if a couple of decades down the road, the lasting legacy of social media was the erosion of nuance and civility in public discourse.


Cindy Sridharan

@copyconstruct on Twitter. Views expressed on this blog are solely mine, not those of present or past employers.