A History of Cybersecurity and Cyber Threats

When asked when cybersecurity started, many of us will instantly think of the Internet. Or more specifically, how connecting to it opened the door to computer viruses (and antivirus software) and cyber attacks. But did you know the cybersecurity industry has been growing since the 1940s?  

Even before networks existed, theorists were already preparing for the risks that would come with the advancement of technology.

In this article, we will explore the history of cybersecurity and its evolution—from the time of the first computer threats to the rise of risks amplified by artificial intelligence and cloud computing. And we’ll keep updating this post to make sure it stays current.

First Things First: What is Cybersecurity?

Let’s level set with some cybersecurity facts. The term cybersecurity refers to the practice of protecting computer systems, networks, programs, and data from digital attacks, unauthorized access, damage, or theft. Cybersecurity also covers the processes and technologies that support this work.

There are different components to cybersecurity, including but not limited to:

  • Network Security: Primarily, the protection of networks and their infrastructure from unauthorized access, attacks, and disruptions. Examples: firewalls, intrusion detection and prevention systems, as well as virtual private networks (VPNs).
  • Information Security: Focuses on protecting the confidentiality, integrity, and availability of an organization’s data. Examples: encryption, access controls, and data backups (see the sketch just after this list).
  • Application Security: Addresses the security of software and applications by identifying and mitigating vulnerabilities that could be exploited by attackers. Examples: regular software updates and patches.
  • Endpoint Security: The securing of individual devices like computers, smartphones, and tablets. Examples: antivirus software, endpoint detection and response (EDR) solutions, and mobile device management (MDM) tools.
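
To make the information security bullet a little more concrete, here is a minimal sketch of encrypting data at rest in Python. It assumes the third-party cryptography package is installed (pip install cryptography) and is purely illustrative; a real deployment would pair encryption with proper key management, access controls, and backups.

```python
# Minimal sketch: symmetric encryption of data at rest (illustrative only).
# Assumes the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

# In practice the key would live in a key-management system, never in source code.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer_id=42,balance=1000.00"
token = cipher.encrypt(record)      # ciphertext that is safe to store on disk
restored = cipher.decrypt(token)    # recovering the data requires the same key

assert restored == record
print("encrypted record:", token[:24], b"...")
```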

Cybersecurity—which you may have guessed is an incredibly broad term—also includes user access to systems and networks, security awareness training, incident response and management, and security governance.

Cybersecurity History and the Evolution of Cybersecurity Threats

From the early days of computing, when security was barely an afterthought (if a thought at all), to the modern era of sophisticated cyber threats, let’s explore the milestones, paradigm shifts, and the now familiar cat-and-mouse game between security professionals and malicious actors.

1940s: Viruses as a Theory

Our cybersecurity story begins in the 1940s. To be more precise, 1945, when the first general-purpose electronic digital computer (ENIAC, or Electronic Numerical Integrator and Computer) was completed.

Now, these machines were large, noisy, and, more importantly, very rare. They were primarily used for scientific and military calculations. And there were, in fact, just a few around the world. And, if we’re being honest, most people didn’t know they even existed.

Although networks did not yet exist in the 1940s, many people were already hypothesizing about what they could look like in the future. So, while there were no connections between machines yet, we do have theorists like John von Neumann, who was already thinking about what a virus could look like. His main idea? The notion that there could be some sort of mechanical organism able to copy itself and spread to new hosts. His paper, Theory of Self-Reproducing Automata, would not be published until the 1960s, but the seeds of virus theory were there 20 years earlier.

1950s: Phone Phreaking and Mainframes

The 1950s is when we started seeing the first examples of hacking. Specifically, something called “phone phreaking” (not as fun as it sounds): the manipulation of telephone signaling protocols, mostly to allow people to make cheaper or no-cost calls. (We’re jumping ahead a bit, but it’s been widely reported that in the 1970s, Steve Wozniak and Steve Jobs—co-founders of technology giant Apple—got their start together building and selling phone phreaking equipment before launching the company.)

This era, however, also marked the advent of modern computing. This is where we see the emergence of large mainframe computers and the dawn of the digital age. These machines were expensive and primarily used for scientific and military purposes. Security measures were relatively basic, and the idea of networked computing was not widespread.

The focus during this period was primarily on developing and refining computer systems rather than addressing security concerns. So, security measures centered mainly on physical access to computers; you could reasonably lock a door and be fairly sure no one was going to tamper with the machine in that room. Computer systems were standalone units with limited connectivity. The idea of a connected network of computers, which later became the foundation of the internet, had not yet been conceived.

However, the ’50s saw the emergence of a few pioneering security measures, including user authentication through password systems and rudimentary access controls. As one might expect, these implementations varied widely among different computer systems because there were no standardized protocols.
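
Purely as an illustration of the idea behind password-based authentication (and emphatically not how those early systems worked, since they often stored passwords in plain text), here is a minimal sketch of a modern salted-hash password check using only Python’s standard library.

```python
# Minimal sketch of salted password hashing (illustrative, not production-grade).
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    """Return (salt, digest) derived with PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```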

1960s: The First Hackers

During the 1960s—a decade best remembered for its social and cultural upheavals rather than its technology—computers remained quite large and expensive. However, it’s this decade that actually gave rise to the first hackers. Now, what hackers did in the ’60s was quite different from what they do today.

These earlier computer hacking attempts were mostly focused on gaining access to certain systems. For example, in 1967, IBM invited students to test-drive its new computer. Through this process (something we would typically call “user testing” today), IBM learned about possible vulnerabilities. So, there was already a concern about security measures.

In the ’60s, the cyber threat landscape was in its infancy, though. And the notion of cyberattacks as we understand them today was not prevalent. The primary concerns were still focused on the physical security of the hardware and preventing unauthorized access to the limited computing resources. However, there were no standardized security practices, and the idea of cybersecurity as a distinct discipline had not yet emerged.

Still, the ’60s mark an important milestone in terms of cybersecurity strategy. As mainframe computers became more widely used for business and scientific applications, security challenges associated with these systems began to surface. And, as the years progressed and computers became smaller and cheaper, this focus only grew.

1970s: Origins of the Internet and Catch Me If You Can

For many, the 1970s was a time full of disco, presidential scandals, and bell-bottom pants. For others, it’s the decade when the cybersecurity industry really started. And this should come as no surprise, as the Advanced Research Projects Agency Network (or ARPANET) was created in September 1969. At the turn of the decade, then, we witnessed the rise of the world’s first operational packet-switched network, which served as the foundation of the Internet.

The goal of ARPANET was to facilitate communication and resource sharing between researchers and institutions. While there has been speculation that ARPANET was prompted by the need for a reliable and decentralized system that could withstand nuclear attacks, that claim is false.

“The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim,” former ARPA director Charles Herzfeld told Ars Technica. “To build such a system was, clearly, a major military need, but it was not ARPA’s mission to do this; in fact, we would have been severely criticized had we tried. Rather, the ARPANET came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators, who should have access to them, were geographically separated from them.”

The 1970s is also when we see what is widely considered the first computer virus. It was created by Bob Thomas, who developed a program that could move between computers on ARPANET, displaying the message “I’M THE CREEPER: CATCH ME IF YOU CAN” on infected terminals.

In response, Ray Tomlinson developed a new program, Reaper, to catch Creeper. In other words… the first virus and the first antivirus were created in this decade.

[Embedded post from Avast Software: “Flashback Friday! We present the program that started it all: Creeper. Creeper was created in 1971 by Bob Thomas and designed to move between DEC PDP-10 mainframe computers.”]

These developments showed how essential security solutions were becoming. Government agencies suddenly had to discuss ways to mitigate these risks, as any unauthorized access could have severe consequences. This is how early computer security research programs took shape within ARPA (the Advanced Research Projects Agency, the U.S. Defense Department division behind ARPANET).

What’s important to mention at this point, though, is that the different organizations working on security flaws also began focusing on operating systems and software. And, at the end of the decade, we can see exactly why.

In 1979, on a dare, a 16-year-old Kevin Mitnick managed to hack into the Ark, the computer system operated by Digital Equipment Corporation (DEC) for the development of its RSTS/E operating system, and made copies of the software. Mitnick carried out one of the earliest well-documented social engineering attacks, tricking the administrators of the Ark into giving him employee credentials. He was later arrested and convicted for the intrusion, becoming one of the most famous early cybercriminals.

1980s: Networks, Viruses, and Malware

The 1980s—filled with glam metal, the rise of hip-hop music, and Reaganomics—was definitely the decade of high-profile attacks, both against private companies and government systems. It is also the time when computers became personal. So, there was a marked shift towards a diversity of systems.

Many computer users in the 1980s connected through bulletin board systems (BBSs), which allowed them to dial into a host system via a modem. So, as sharing became easier, security challenges exploded. This is where we see notable examples of viruses and malware, including the Elk Cloner virus (1982), which targeted Apple II computers, and the infamous Brain virus (1986), which affected IBM PC-compatible systems. Then there was the Morris Worm, one of the earliest instances of widespread malware, unleashed in 1988 by Robert Tappan Morris.

The Domain Name System (or DNS) was also introduced in 1983, simplifying the process of navigating the internet. Passwords were a primary means of access control, but security practices were still often lax. The story was slightly different for government agencies, which recognized the importance of securing information systems. In 1984, for example, President Ronald Reagan issued National Security Decision Directive 145, which focused on securing telecommunications and computer systems.

The 1980s also gave us the very first ransomware attack, dubbed (cruelly, given the epidemic at the time) AIDS, or the AIDS Info Disk, which was distributed in 1989 via floppy disks mailed to attendees of the World Health Organization’s international AIDS conference. The program was created by Joseph L. Popp, who was arrested and charged with multiple counts of blackmail.

If we had to sum up the 1980s in a few words, we would say it was a period of innovation, exploration, and early connectivity. Individuals began truly exploring and exploiting vulnerabilities for the sake of curiosity or challenge. But cyber espionage was also born during this time. For example, a German hacker named Markus Hess used ARPANET to infiltrate government systems in 1986, eventually gaining access to roughly 400 military computers.

1990s: The Internet Ascends

Along with the birth of grunge music and “The Simpsons,” this decade is also marked by the continued rise of personal computing, combined with the explosive growth of the Internet – and, in turn, the cybersecurity industry.

The 1990s saw a continued evolution of digital technologies as well as the commercialization of the web. But as we mentioned, as Internet usage proliferated so did cybersecurity challenges.

For example, the popularity of Internet Relay Chat (IRC) and online communities like America Online (AOL) gave rise to new forms of cyber threats, including unauthorized access, social engineering, and distributed denial-of-service (DDoS) attacks.

Microsoft Windows, which had become the dominant, mainstream operating system for personal computers (how else could Microsoft get some “Friends” stars to host their user guide for Windows 95?), also got increased attention from cyber attackers.

Due to the popularity of Windows, malware such as viruses and worms mostly targeted those systems. In response, many companies began developing firewalls as a line of defense.
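
As a rough, hypothetical illustration of what a basic packet-filtering firewall does, here is a first-match rule sketch in Python using only the standard library. The rule format and addresses are invented for illustration and do not correspond to any particular firewall product.

```python
# Conceptual sketch of first-match packet filtering (hypothetical rules, illustrative only).
import ipaddress
from dataclasses import dataclass

@dataclass
class Rule:
    action: str           # "allow" or "deny"
    src_net: str          # source network in CIDR notation
    dst_port: int | None  # destination port, or None for "any"

RULES = [
    Rule("allow", "192.168.0.0/16", 443),  # internal hosts may reach HTTPS
    Rule("deny",  "0.0.0.0/0", 23),        # block Telnet from anywhere
    Rule("deny",  "0.0.0.0/0", None),      # default deny for everything else
]

def decide(src_ip: str, dst_port: int) -> str:
    """Return the action of the first matching rule, as early packet filters did."""
    addr = ipaddress.ip_address(src_ip)
    for rule in RULES:
        in_net = addr in ipaddress.ip_network(rule.src_net)
        port_ok = rule.dst_port is None or rule.dst_port == dst_port
        if in_net and port_ok:
            return rule.action
    return "deny"

print(decide("192.168.1.10", 443))  # allow
print(decide("203.0.113.5", 443))   # deny (falls through to the default rule)
```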

Founded in 1990, the Electronic Frontier Foundation became a prominent organization advocating for digital rights and civil liberties. EFF played a key role in addressing legal issues related to cybersecurity and pushed some important discussions about the need for regulations and legislation to protect personal information to the forefront. And EFF’s work continues to this day.

Naturally, governments and law enforcement agencies around the world also began establishing electronic crime task forces to address cybercrime. These initiatives aimed to investigate and prosecute individuals engaging in malicious activities online. And, as the year 2000 approached, concerns about the Y2K bug led to increased scrutiny of computer systems worldwide.

In short, the 1990s set the stage for cybersecurity as a critical field. We saw organizations and governments recognizing the need for proactive measures to secure digital assets and infrastructure. So, in other words, the ’90s laid the groundwork for the complex and rapidly evolving cybersecurity landscape of the 21st century.

2000s: Breaches, Clouds, and All the Mobile Devices

The 2000s—years marked by the rise of the iPod, conflicts in the Middle East, and the Great Recession—saw the widespread adoption of broadband internet, which led to increased connectivity.

While this meant we had access to a faster and more reliable internet, it also expanded the attack surface for cyber threats. Add to that the growth of e-commerce and online banking, and you can see how several new cybersecurity challenges emerged, including the theft of financial information, identity theft, and phishing attacks targeting users’ sensitive data.

The 2000s also witnessed a surge in sophisticated malware, including worms, viruses, and trojans. Notable examples include the Code Red and Nimda worms, which exploited vulnerabilities in Microsoft Windows systems.

As mobile devices became more prevalent, concerns about mobile security grew, too. The spread of mobile malware, unauthorized access, and data breaches on smartphones became notable cybersecurity issues.

At the same time, the adoption of cloud computing introduced new security challenges related to data protection, access controls, and the shared responsibility model.

During this time, we also saw some high-profile data breaches, such as those affecting major corporations and government agencies. Incidents like the TJX data breach (2007) underscored the need for robust cybersecurity measures.

Now, the demand for cybersecurity solutions led to the emergence of a robust cybersecurity industry. Companies specializing in antivirus software, firewalls, intrusion detection systems, and other cybersecurity tools proliferated in the 2000s.

In short, the “Aughts,” or “two-thousands,” paved the way for continued innovation and adaptation.

2010s: New Leaks, More Destruction

The 2010s witnessed a significant evolution in the cyber threat landscape. It was a decade of even more notable cyberattacks and paradigm shifts, and of the transition from simple data theft to actual physical destruction.

One such example was the Stuxnet worm, discovered in 2010, which marked a new era in cyber warfare (Stuxnet is believed to have been a joint U.S.-Israeli operation that targeted Iran’s nuclear facilities).

The period also saw the 2013 Snowden leaks, which exposed a global surveillance network. This led to an increase in cyber-espionage and the development of foreign intelligence-gathering efforts by various countries.

Another trend that marked the decade was the emergence of new cybersecurity trends and technologies. Artificial intelligence (AI) and machine learning (ML), in particular, began to be used for cyber defense – with AI being employed to recognize and prevent malware and to initiate recovery processes, too. 

Major data breaches, financially motivated cybercrime, and destructive malware that rendered entire systems unusable marked the 2010s. The decade also saw a significant increase in cyber-espionage operations and destructive cyberattacks, highlighting the continued need to combat these challenges.

Cybersecurity Today

In the last few years, cyber threats have become more sophisticated. Advanced persistent threats (APTs) and nation-state-sponsored cyber operations employ advanced techniques to breach systems and evade detection. Unfortunately, ransomware attacks have surged, too, posing significant challenges to businesses, government entities, and critical infrastructure. We’re also dealing with things like deepfake technology, which enables next-level social engineering attacks.

Critical infrastructure, including energy, healthcare, and transportation systems, is increasingly targeted as well, while the integration of artificial intelligence (AI) and machine learning (ML) technologies in cybersecurity has become more prevalent. These technologies enhance threat detection, automate response mechanisms, and provide proactive defense against evolving threats. Add to that the widespread adoption of cloud computing, and you can quickly see how ensuring security needs to remain a top priority.

The future promises even more intricate challenges, from the proliferation of the Internet of Things (IoT) to the potential impact of quantum computing on cryptographic protocols. So, we have to remember the lessons of the past so we can have a truly proactive and collaborative approach to cybersecurity. One that involves constant vigilance, education, and the development of innovative solutions.

Looking Ahead

It’s evident that the cybersecurity industry has undergone a profound transformation through the years. In the early days, marked by the birth of ARPANET, computing was a relatively small and trusting community, more or less oblivious to the challenges that would unfold. As the internet expanded and became an integral part of our daily lives, though, so did the complexity and magnitude of cybersecurity threats.

In the 1990s, we saw the commercialization of the internet, while the 2000s brought about the rise of e-commerce, mobile devices, and advanced persistent threats. Each of the eras we have covered in this article has contributed its own chapter to the story of cybersecurity. And, with each leap forward, new vulnerabilities emerged.

Today, the landscape has become more complex than ever, with sophisticated attacks requiring equally sophisticated defense strategies. Luckily, cybersecurity doesn’t need to be so complex. If you’re looking for a way to get all the security you need in a single, accessible platform, you need Coro. Our cybersecurity modules are designed to integrate within one platform with one interface, one endpoint agent, and one data engine. Learn more