The last decade has brought some of the most rapid-fire changes to the engineering world that many of us have ever seen. Each new innovation happens faster and builds on top of previous innovations, much like Ray Kurzweil describes with his Law of Accelerating Returns. This two-part article will explore a handful of new engineering practices and patterns that have emerged in the last few years that seem poised to forever change the way we build and use software.
In my previous article, DevOps Engineer: The Hottest Job of 2020, I discussed a little of the history of virtualization and, more specifically, serverless computing. Serverless computing is one of the hottest trends of the last half decade, although in many ways it's still in its infancy. It allows engineers to build software much more rapidly, and with less server-side knowledge, than is possible using Docker or earlier virtualization strategies. Serverless applications also require virtually no server administration and offer autoscaling capabilities that are difficult to match with any other deployment strategy.
At present, the best way -- and, I daresay, the only way -- to do cross-platform, cross-language serverless application development is using the Serverless Framework. By cross-platform, I mean that you can build your application once and deploy it to AWS, Google Cloud, Microsoft Azure, IBM Cloud, and a number of other public cloud platforms. You can also deploy it on-premises in a variety of ways, including Kubeless and Knative. By cross-language, I mean that it's very easy to build applications using the Serverless Framework in a variety of languages, including Node.js, Python, Go, PHP, Java, and others. This extreme flexibility in terms of cloud platform and programming language is what makes the Serverless Framework the only game in town for serverless development in 2020.
The Serverless Framework includes a command-line interface that makes it easy to scaffold new projects. For example, the commands below will generate a Node.js serverless project and deploy it to AWS within seconds:
serverless create --template aws-nodejs --name my-special-service
serverless deploy
You could just as easily scaffold that same project in Python and deploy it to Google Cloud Platform (GCP):
serverless create --template google-python --name my-special-service
serverless deploy
These projects don't perform any real function and so I'm simplifying a bit, but the point I'm trying to make is that the CLI that is built into the Serverless Framework can scaffold a project in a wide variety of languages and deploy it to a wide variety of cloud platforms. The engineer can use the language and cloud platform they are most familiar with, and easily switch to another cloud platform or even a different language down the road if required by a different project. The learning curve involved in changing languages or cloud platforms using the Serverless Framework is much lower than with other development approaches.
Once you've scaffolded a project, the way you think about the app is pretty much the same regardless of what language or cloud vendor you are targeting -- i.e., you are building functions that respond to events. You don't have to think about things like what Linux distro you're running on, what packages you need to install on the server, load balancing, how many instances you need in a cluster, etc. All of that vanishes with serverless, and the only things you are concerned with as an engineer are the events your application will respond to and the functions that will respond to them.
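To make the event-and-function model concrete, here is a minimal sketch of the kind of handler the aws-nodejs template scaffolds. The function name and response shape here are illustrative assumptions, not the exact generated code:

```javascript
// handler.js -- a minimal AWS Lambda-style function.
// The platform invokes `hello` whenever the event it is wired to fires
// (an HTTP request, a queue message, a scheduled timer, etc.).
const hello = async (event) => {
  // The only concerns are the incoming event and the response --
  // no servers, load balancers, or OS packages to think about.
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Hello from a serverless function!',
      input: event,
    }),
  };
};

module.exports = { hello };
```

Everything else -- wiring the function to an HTTP endpoint, a queue, or a schedule -- is declared in the project's serverless.yml configuration rather than in code.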
This is an entirely new paradigm in backend engineering and it will undoubtedly change the way we build software in the next decade. It frees up software engineers to focus on building amazing user experiences and powerful business logic without needing to worry about the infrastructure their application will run on or how it will get deployed.
Wikipedia defines test automation as follows: "In software testing, test automation is the use of software separate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes. Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or perform additional testing that would be difficult to do manually."
While test automation in its modern form is relatively new, software testing dates back decades. Some of the earliest notions of testing machine behavior arose around 1950, when Alan Turing proposed his famous Turing Test as a way to assess the "intelligence" of a machine. During this era, computers weren't interconnected as they are today, and there were very few programming languages with which to write software. Companies and governments didn't bother to ensure the quality of their software or enforce standards of quality programmatically.
After Apple and IBM revolutionized the technology industry with the introduction of personal computers in the mid-1970s and early 1980s, there was an explosion in the number of programming languages, and it gradually became standard practice to employ people and tools to ensure that an application operated and performed as expected.
These days, test automation often focuses on (1) Graphical User Interface (GUI) Testing and/or (2) API Testing. Most of the early efforts at testing GUIs and APIs were manual, requiring humans to click around and identify problems each time the code changed, or to use API testing tools to validate API responses. These approaches were error-prone, as they relied on humans to perform repetitive tasks requiring a high degree of precision, and it didn't take long for test automation tools and frameworks to appear that offloaded this kind of work to computers.
Modern test automation strategies generally fall within one of the following categories:

- Unit and integration testing, in which frameworks exercise the application's code directly
- GUI (end-to-end) testing, in which a tool drives a real browser or native interface the way a user would
- API testing, in which a tool sends requests to the application's endpoints and validates the responses
- Performance (load) testing, in which many scripted user scenarios are run concurrently
While these tools and frameworks represented a huge improvement over manual testing by humans, they introduced problems of their own, including:

- Test scripts that require real coding skill to write and maintain
- Brittle tests that break whenever the UI or API changes, even slightly
- A maintenance burden that grows along with the application
A newer approach to test automation, called "codeless testing", is poised to change the way we test software in the future. With codeless testing, test automation engineers don't need to have coding skills. They can simply "record" tests by clicking around in an application, and their activities are converted into the necessary test specifications required to replicate their actions. Some of the best codeless testing tools involve nothing more than installing a browser extension and clicking a "Record" button to start recording a test. One of the best examples of this is GhostInspector.
With GhostInspector, you can record highly complex UI tests that exercise all aspects of your application. GhostInspector even supports testing email confirmations, file uploads, and visual regressions to ensure that CSS and other styling elements don't change unexpectedly as a result of a code change.
Another tool that offers codeless testing is Percy.io. Percy is more focused on testing visual regressions, but an approach like this can also catch API and logic regressions, since such regressions often surface in the form of UI regressions.
Codeless testing tools like this can easily be integrated into CI/CD pipelines so that they are triggered post-deployment. So, for example, after you deploy code to your testing environment, your pipeline can trigger your GhostInspector test suite, which will run through a full battery of tests, exercising all aspects of your application. If the test suite passes, you know that you can deploy the application to production without much risk that a user will encounter something unexpected. When code changes, a test automation engineer can simply re-record whichever test scenario broke as a result, which often takes only a few minutes and can be done by test engineers who have no coding skills.
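A post-deploy pipeline step along these lines could be sketched as follows. The suite ID and API key are placeholders, the helper name is my own, and the exact endpoint and response fields should be verified against GhostInspector's own API documentation before relying on them:

```javascript
// Sketch of a post-deploy step that kicks off a GhostInspector suite
// over its REST API and fails the build if the suite did not pass.
const buildExecuteUrl = (suiteId, apiKey) =>
  `https://api.ghostinspector.com/v1/suites/${suiteId}/execute/?apiKey=${apiKey}`;

// In a real pipeline (Node 18+), the step would look roughly like:
//
//   const res = await fetch(buildExecuteUrl(process.env.SUITE_ID, process.env.GI_API_KEY));
//   const result = await res.json();
//   if (!result.data.passing) process.exit(1); // block the production deploy
//
// (The `data.passing` field is an assumption -- check the API docs.)

module.exports = { buildExecuteUrl };
```

The key point is that the pipeline only needs one HTTP call: the recorded scenarios themselves live in the codeless tool, not in the repository.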
In this new codeless testing era, a common practice is to run linting checks as part of your pre-deployment pipeline, then build and deploy the app, and then run your codeless test suite once the app reaches your test environment. This approach ensures that the application adheres to your code style guidelines, is free of syntax errors, and functions as expected from the perspective of your users. These codeless testing tools can also be used for performance (or load) testing: each scenario can be run multiple times in sequence with different data inputs to simulate concurrency across a wide variety of user behaviors.
It's not unrealistic to envision a future where computer programs can create other computer programs, and the need for human software engineers all but vanishes. We're a good ways off from that future, however, and in the meantime the trend seems to be towards humans writing less code (and requiring less coding skill) to create sophisticated software applications. One paradigm that has emerged recently along these lines is "no-code" or "low code" programming.
For over two decades, we've had WYSIWYG editors that allow web developers to use drag-and-drop methods to build out web pages and entire websites. The WYSIWYG editors generate the HTML and CSS required by web browsers, requiring very little coding effort from the developer. This approach often produces bloated code that is hard for humans to read and edit, but it definitely works, and it's in widespread use today in CMS solutions such as WordPress, Wix, and Weebly, and in e-commerce platforms like Shopify, BigCommerce, and others.
Creating beautiful user interfaces via drag-and-drop methods is a lot easier than building APIs and backend functionality using visual tools, however, and it wasn't until recently that no-code/low-code approaches have become feasible on the backend or for the entire application. Some of the most notable examples of no-code programming for backend and fullstack development are Node-RED and Bubble.io.
As described by Node-RED itself, "Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways. It provides a browser-based editor that makes it easy to wire together flows using the wide range of nodes in the palette that can be deployed to its runtime in a single-click."
Node-RED is built in Node.js and makes it possible to do everything from building REST and GraphQL APIs and automating email confirmations to configuring hardware and networks, and any number of other backend tasks. All of this can be done via a drag-and-drop web interface. Node-RED also has a plugin architecture that makes it easy to use pre-built "flows" created by third-party developers.
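When a flow needs logic that can't be expressed by wiring nodes together, Node-RED lets you drop down into a "Function" node, whose body is plain JavaScript that operates on a msg object. A minimal, hypothetical example of such a body:

```javascript
// Body of a hypothetical Node-RED "Function" node: it receives the
// incoming msg object, rewrites its payload, and returns it so the
// next wired node in the flow receives the result.
function onMessage(msg) {
  const name = (msg.payload && msg.payload.name) || 'world';
  msg.payload = { greeting: `Hello, ${name}!` };
  return msg;
}

// In Node-RED itself the editor supplies `msg` and the surrounding
// function wrapper; the named function here is just for illustration.
```

Everything else -- the HTTP-in node that feeds this function and the HTTP-response node that returns its output -- is dragged and wired in the browser, with no further code.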
This is a powerful step in the direction of giving non-technical users the ability to build fullstack applications without any coding skill at all. Approaches like this will undoubtedly become more widely used in the future as many workers in other sectors transition to careers in engineering.
Other innovations poised to transform the world are WebAR and WebVR. Most of us have read about, heard about, or had some kind of contact with "augmented reality" (AR) or "virtual reality" (VR).
Augmented reality applications essentially "augment" video streams by overlaying them with text and graphics. While a user views the real world around them, their vision is augmented with objects that provide additional information about the world or make it more entertaining.
One of the earliest major attempts at bringing augmented reality to fruition was Google Glass, introduced as "Project Glass" in 2012. While the project gained traction fairly quickly, it also encountered numerous legal roadblocks related to privacy and intellectual property in its early years. Google took much of the project behind closed doors in 2014-2015: it shut down a handful of "Basecamp" retail locations where potential customers could go to try out Glass, discontinued the consumer-focused Glass Explorer program, and focused more heavily on its Glass at Work product for enterprise users.
Smart glasses are one way to augment reality. A more forward-thinking approach to AR, and one that will undoubtedly gain attention in the near future, is smart contact lenses. One company, Mojo Vision, seems to be one of the leaders in this nascent industry. Mojo "seamlessly fuses digital information onto the world around us ... using microelectronics and a tiny display to share critical information". Mojo calls its approach "Invisible Computing" to emphasize that using contact lenses as a visual display means AR never gets in your way. You can enrich the world around you with useful information without wearing glasses or a headset, viewing a smartphone, or using some other device.
While augmented reality is all about modifying video to enrich the "real" world, virtual reality is more about taking you into an entirely new artificial one -- a world based on 3D graphics objects and scenes, completely designed by humans. For a good historical overview, read Virtual Reality Society's History of Virtual Reality.
One of the earliest examples of a head-mounted display (HMD) was the Telesphere Mask, which was patented in 1960. The headset provided stereoscopic 3D and wide vision with stereo sound, but lacked many of the features you find in modern HMDs, such as the ability to interact with the virtual medium via motion tracking.
These days, most people who have experienced VR did so via Oculus, PlayStation VR, or HTC Vive. Most VR applications that have gained real traction over the last decade relate to massively multiplayer online gaming, where dozens or hundreds of people join up in a virtual world to play a game of some kind.
Tools for building AR/VR applications have come a long way over the last decade. Development platforms like Unity and Unreal Engine can be used to build everything from mobile and desktop games to multiplayer games for PlayStation and Xbox. These platforms are not easy to learn, however, and the applications they produce are generally built for a specific operating system, such as iOS, Android, Windows, or macOS. It wasn't until very recently that it became possible to build fully immersive AR/VR applications that run in the web browser and can integrate with HMDs, sensors, and other AR/VR hardware.
Graphics rendering alone is not enough to create AR/VR experiences, however; the browser also needs access to the hardware. To that end, the W3C recently published the WebXR Device API working draft, a specification that describes support for accessing virtual reality (VR) and augmented reality (AR) devices, including sensors and head-mounted displays, on the Web.
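As a small illustration, a web page can feature-detect WebXR through the navigator.xr entry point that the specification defines. The wrapper function below is my own; only navigator.xr and isSessionSupported() come from the spec:

```javascript
// Feature-detect immersive VR support via the WebXR Device API.
// `navigator.xr` is the entry point the specification defines;
// isSessionSupported() resolves to true when the browser and device
// can run the requested session mode (here, fully immersive VR).
async function supportsImmersiveVr(nav) {
  if (!nav || !nav.xr) return false; // no WebXR at all (older browsers, Node)
  return nav.xr.isSessionSupported('immersive-vr');
}

// In a browser:
//   supportsImmersiveVr(navigator).then((ok) => {
//     if (ok) { /* offer an "Enter VR" button that requests a session */ }
//   });
```

Pages that pass this check can then request an XR session and render into the headset, all from ordinary web code.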
The reason these innovations are so disruptive is simple. Almost everyone using a computer these days spends most of their time on the World Wide Web, yet the Web is still a fairly boring place. It's a completely 2D environment without much social interaction. In the future, you'll be able to visit a website and experience it in full 3D. You'll be able to go shopping with your friends and family in VR. You'll be able to dance in virtual dance clubs while wearing a headset and body sensors, or sing karaoke at a virtual bar. The web will seem more realistic and it will be social. You'll be able to experience it rather than merely browse it as you do today. This new web experience will look just as real as PlayStation or Xbox games do today, and your connection to it will start with HMDs and body sensors and gradually extend to smart contacts, tactile sensors, and ultimately brain-machine interfaces (BMIs). It will be a completely new universe running inside of our existing one, with its own financial systems, markets, and economies.
If we dare to dream, we might even envision a future world where people who die due to some illness unrelated to the brain can continue living on for a time in this new virtual universe if their brain remains connected to a brain-machine interface (BMI), such as Elon Musk's Neuralink, and the proper medical equipment to keep the brain functioning. Those of us who are still "alive" in the "real" world can periodically drop in to VR and engage with them, the same way we did before they passed. If we ultimately learn how to digitize the human brain and the consciousness that emerges from it, such a virtual life might go on indefinitely.
We are heading into some strange times indeed!
In the second part of this article, we'll explore four more highly disruptive innovations that are also poised to change the world. Stay tuned!