Taking a closer look at badBIOS

This analysis is available as a PDF or an ePUB.

DISCLAIMER: The contents of this analysis are meant as a critical review of the data currently available on badBIOS, and are not intended to disparage, in any way, the author of the badBIOS claims.


Stuxnet was a game-changer. In the words of one researcher, it marked “a clear turning point in the history of cyber security and in military history as well.” Even though some of the propagation methods employed by Stuxnet, taken in isolation, may have been known to be theoretically possible at the time, it was the first public record of an attack of such complexity against a target of substantial value. The version of Stuxnet that gained public exposure “represents the first of many milestones in malicious code history – it is the first to exploit four 0-day vulnerabilities, compromise two digital certificates, and inject code into industrial control systems and hide the code from the operator.”

The scope of the Stuxnet development and execution effort also had another, perhaps unintended, consequence: it lowered the bar on what would be considered a plausible cyber attack. If the Stuxnet authors were able to physically break into two companies and steal signing certificates in order to sign the malicious binaries, the reasoning went, or exploit four zero-day vulnerabilities, then what else could they do? Did anything lie outside their reach? After Stuxnet, no idea was too outlandish. Could malware jump over an air gap? Stuxnet did. Can systems infected with a specific strain of malware communicate through a means other than traditional wired or wireless networks? Flame did. What, then, is expected behavior? What is the baseline? As Dan Geer recently pointed out, “everywhere you look, cyber security practitioners are trying to get a handle on “What is normal?” so that that which is abnormal can be identified early in the game.” It is against this backdrop of behavioral uncertainty that the claims made about the badBIOS malware should be weighed: any unexplained operation or response observed on an isolated or networked group of computers can be taken to be the workings of malware, because as Stuxnet and its cousins Flame and Duqu showed, malware authors are several years ahead of what is commonly believed to be possible.


In mid-October of this year, Dragos Ruiu, a security enthusiast well known within the infosec community, used social media channels to make a startling announcement: his home-office systems were infected with an advanced type of malware which appeared to infect systems at the BIOS level, jump air gaps by communicating in the 35kHz audio range, and infect different operating systems (OSs). Ruiu called the malware badBIOS, and started sharing snippets of information relating to its operation on Twitter, Facebook, and Google+. Due to its advanced nature, Ruiu explained, it was not immediately possible to obtain samples. For the month following the disclosure, the infosec community was abuzz with discussions about badBIOS: was it real, or was it a hoax? Was this the next Stuxnet? As we will see, once the scientific method is applied to Ruiu’s assertions, the answer is self-evident.



Dragos Ruiu is a familiar name within the infosec community. He is best known as an organizer of the Pwn2Own security challenge, and of the CanSecWest and PacSec security conferences. His resume states that he worked as a C programmer, and later as a VAX administrator and an embedded-systems programmer. It also states that he was an original member of the Honeynet Project, and a founding member of SourceFire, a company best known for its network security software and hardware. He is also listed as one of the Major Contributors to Snort, an open source Network Intrusion Detection System (NIDS). In addition to this, Ruiu authored two books (Testing Digital Video in 1997, and Testing Digital Video-Discover the Latest Techniques and Products for Real-World Video Testing, also in 1997) and co-authored two others (Broadband Testing Technologies, B-ISDN Primer in 1993, and Deploying Snort 2.0: Maximizing and Extending Intrusion Detection in 2003). In spite of this, there appears to be no reference to any specific security research conducted by Ruiu, presentation made by him at a security conference, or forensic or malware analysis undertaken by him.

When releasing information about badBIOS, Ruiu shared his observations progressively and without much technical detail. A sizable portion of the information available on badBIOS was released by Ruiu on Twitter, 140 characters at a time. What this means is that there is no authoritative analysis of the technical details of badBIOS comparable to Symantec’s W32.Stuxnet Dossier. Graham and Jaenke are the ones who come closest to providing a technical analysis. Analyzing the primary information made available by Ruiu therefore involves reviewing a large number of sources, each with a small amount of information, and each with little accompanying context.

Compiling the data released on Twitter, Facebook, Google+ and in one podcast interview pertaining to badBIOS, it is nevertheless possible to put together a list of forty (40) distinct claims made by Ruiu regarding badBIOS. These are listed in their entirety in Annex B, including reference quotes and links. For the sake of brevity, Table 1 below includes only a subset of ten (10) claims; the same process used to test the validity of these ten claims can be applied to the remaining thirty. The numbers in Table 1 correspond to the row numbers in the complete list in Annex B.



The method of analysis that accompanies the assertions made by Ruiu is characteristic of instinctive thinking: the claims are unsubstantiated, assumptions and conjecture are combined with facts, and unexplained behavior is taken to be the work of advanced malware. Even though Ruiu notes that settling on the idea that an advanced form of malware was behind the unexplained behavior was “about the second last theory we tried,” the primary sources on badBIOS lack alternative hypotheses, and contain several examples of cognitive distortions. Ten assertions are analyzed below. The numbering corresponds to the row in the list of forty assertions presented in Annex B.

A common theme present throughout the first-hand reports of badBIOS operation is the assumption that the malware exists. The attribution of the unexplained behavior to an advanced form of malware is presented to the public as fact. Ruiu mentions having looked at other alternative explanations early in the discovery of this malware, but does not share those hypotheses or discuss them with his readers. The assertions are presented in rapid succession, and without supporting evidence. This reduces the assertions to a series of anecdotes and eyewitness accounts. Further, Ruiu doesn’t explain why these assertions are taken to be true. These assertions should only be looked at as a list of questions that should characterize the start, not the end, of an investigation.

Assertion 1: OpenBSD systems infected with badBIOS can unexpectedly enter single user mode.

Reference: “we saw all kinds of weird effects, and things like OpenBSD boxes in single user mode, changing files while they’re not supposed to be on the network, so there’s all kinds of strange things.”

Evidence: Ruiu does not provide corroborative evidence to back this claim. No information is provided on how many OpenBSD systems entered single-user mode, or on whether this was a repeatable or one-time occurrence. In making this assertion, Ruiu is drawing an inference from very little data.


  • Confirmation bias: In making this assertion, Ruiu has concluded that badBIOS is real, and that the OpenBSD systems are infected with it. Unexplained behavior like the system entering single-user mode provides confirmation of, and in a way proves, this conclusion.
  • Observation bias: Although the claim is that the OpenBSD system entered single-user mode of its own accord, there is no contextual information to indicate what other factors may have contributed to it entering single-user mode. Were there any error messages in the system logs? Did any applications enter an error condition? Did any other OpenBSD systems on the network enter single-user mode? What are the differences between the systems that did enter single-user mode and those that didn’t? The available information shows that the observer saw one or more systems in single-user mode, and took that as proof of the existence of badBIOS.
  • Certainty bias: In this case, making the assumption that this behavior is attributable to the badBIOS malware is an indication of the need for a simple explanation to what could be a complex problem. A system rebooting and entering single-user mode can be the result of the malfunction of several components. The analysis of system and application logs is often the starting point for determining the underlying cause of the anomaly.

Notes: Unix-like operating systems such as OpenBSD progress through a series of stages, or ‘runlevels’, during the bootstrap process. Different tasks or operations are undertaken at each runlevel, and a process or subsystem may start at one runlevel and be shut down at another. Runlevel 1 is referred to as ‘single-user mode’, reflecting the fact that this runlevel does not support a multi-user environment. Only the superuser (or administrator) account, referred to as the ‘root’ account in Unix, can log on to the system when the machine is in this runlevel. Single-user mode is also a stage where other services or processes have not yet started; it is reserved for troubleshooting and administrative tasks, and since networking services are not active at runlevel 1, it requires operator interaction at the system console.


  1. A Unix system entering single-user mode is not unusual. A Google search for “OpenBSD stuck in single-user mode” provides a view of the variety of reasons an OpenBSD system can enter single-user mode. The same holds for other flavors of Unix and Linux.
  2. The reference does not indicate whether the system rebooted and entered single-user mode, or whether it entered single-user mode without rebooting. Both are possible, and the root causes of each differ.  
  3. The reference also notes that Ruiu observed files changing on the infected machine when it was not connected to the network. From comments made by Ruiu elsewhere, it is likely that this is a reference to files Ruiu and his team observed being written to and read from by looking at the SysInternals process monitor (procmon) output. This too is not unusual. As a commenter noted on a subsequent Twitter update from Ruiu, “Asking “Why does it do X” is probably an inefficient way to proceed. Uninfected computers do a lot of weird things.”

Assertion 6: Deleting registry keys that badBIOS is using results in the malware entering an error condition.

Reference: “at one point we saw a whole bunch of registry keys as they’re accessing them, so we deleted them … And you could see that their stuff sputtered and then I guess that was the point at which it required operator intervention. And we start seeing more intelligent responses from these things.”

Evidence: Ruiu does not provide evidence to back this claim. Also absent is detail regarding the exact steps that were followed to generate this outcome; detailed steps would make it possible for others to try to reproduce the same behavior. In this case, Ruiu is reaching a conclusion based on very little data.


  • Self-serving bias: This is an example of seeking supporting evidence for the claim of the existence of badBIOS, as doing so also confirms related assertions.
  • Jumping to conclusions bias: Once the bar for confirmatory evidence has been set low, we can be persuaded to reach a conclusion based on any evidence, irrespective of the strength of its argument. Belief in the evidence and in the conclusion makes it difficult to accept that the method by which this assertion was accepted is wrong.
  • Framing bias: This can also be an explanation of reaching incorrect conclusions because the problem (or assertion) is not framed correctly or presented accurately. In this case, the assertion is that one or more processes tied to the badBIOS malware “sputtered” once a number of registry keys were deleted. Framing this in a technical context (stating, for example, that the process iexplore.exe outputs error messages and terminates once registry keys X, Y, and Z are deleted) would make the claim testable and falsifiable. In its current form, the assertion is framed as a general observation.

Notes: The Windows registry, according to Russinovich, Solomon, & Ionescu (Windows Internals Part 1) is “the systemwide database that contains the information required to boot and configure the system, systemwide software settings that control the operation of Windows, the security database, and per-user configuration settings.” Attempting to edit Windows registry keys carries with it the caveat that “you must exercise extreme caution; any changes might adversely affect system performance or, worse, cause the system to fail to boot successfully.” It appears that Ruiu’s intention was to elicit some kind of response by deleting “a whole bunch” of registry keys from a running Windows system. This strategy may work in a well-controlled setting, where the values being deleted are known and documented, and where there is a list of likely outcomes. Deleting the registry keys in that scenario could be a method of testing likely outcomes. Indiscriminately deleting registry keys, however, and reporting on the outcome does not help strengthen the assertion.

Observations: This behavior, as described, should be repeatable. The assertion that deleting registry keys causes the badBIOS malware to ‘sputter’, implies that the operation being observed — the write operations to files on disk, and read and write operations to specific registry keys — is being undertaken by the malware. Since there is insufficient evidence to prove the presence of malware, it is unclear what is entering an error condition — what is sputtering? What registry keys were being written to? Which were erased?

Assertion 12: An audio recording of badBIOS communication shows a pulse pattern in the 35kHz range, at approximately 128 bits per second.

Reference: “So we made some recordings, and a colleague of mine has been looking at it, and he’s found lots of odd spectral artifacts at around 35kHz, and when you zoom it out, it looks like it’s a repeated bit pulse pattern at that rate, something like 128 bits per second, which coincidentally enough seems to match the bit rate of the packets that we’re seeing on the procmon traces, we’re seeing 8-byte payloads, and changing the packets once every one to two seconds, about the same bit-rate.”

Evidence: Ruiu provided an audio recording which shows a signal in the 35kHz range, and noise around the 40kHz range.


  • Confirmation bias: In this case, the signal at 35kHz is taken as confirmation of the existence of badBIOS malware, instead of triggering a need for an alternate explanation. This new information is also interpreted in a way that confirms Ruiu’s formerly stated conclusions.
  • Illusory Correlation bias: This is an example of believing in one’s ability to find causal explanations based on visual clues, and in identifying patterns that support stated beliefs. 

Notes: The range of human hearing is 20Hz – 20,000Hz. The image depicted on the cover page of this report is that of the signal alluded to by Ruiu, which can be observed at the 35,000Hz mark. The signal was further analyzed and “slowed down 20x” to produce the image below. The pulse pattern Ruiu refers to is the pulse that can be observed in this image.


Observations: A signal can indeed be observed at the 35kHz mark; however, there is no evidence to suggest what could be generating it, and one cannot rely on a visual analysis of the waveform to conclude what it could represent. A Twitter user following the discussion, for example, suggested that the signal could be generated by an electronic component present in Ruiu’s system, similar to an NCP1729, a Switched Capacitor Voltage Inverter that operates at 35kHz. In a related tweet, Ruiu expressed a sentiment that becomes characteristic of the claims made about badBIOS, noting “slowed down it sounds like a modem. As I said I’m not doing analysis, looking at results. But it’s looking pretty viable.”
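Visual inspection of a waveform is a weak basis for claims of this kind; measuring the energy at the frequency of interest is straightforward and repeatable. The sketch below is illustrative only: it uses a synthetic 35kHz tone standing in for the recording (the 192kHz sample rate is an assumption, chosen because it can represent a 35kHz signal) and the Goertzel algorithm to compare the power at 35kHz against an arbitrary control frequency.

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Relative signal power at target_hz, computed via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)       # nearest DFT bin to the target
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Synthetic stand-in for the recording: a pure 35 kHz tone sampled at 192 kHz.
rate, n = 192_000, 4096
tone = [math.sin(2 * math.pi * 35_000 * i / rate) for i in range(n)]

power_35k = goertzel_power(tone, rate, 35_000)   # energy at the claimed carrier
power_10k = goertzel_power(tone, rate, 10_000)   # control band: should be near zero
```

Run over successive time windows of the actual recording, a measurement like this would show whether the 35kHz component pulses (as a modulated carrier would) or stays constant (as component noise, such as that of the suggested voltage inverter, would).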

Assertion 16: The badBIOS audio link is encrypted.

References 1, 2: “We haven’t captured the downloads, or gotten any of these modules, it’s all-encrypted” and “nope it probably wasn’t ipv6 over audio, it was seemingly 12byte packets, 8bytes encrypted payload.”

Evidence: Ruiu provides no evidence for this assertion.


  • Assertion bias: This is a prime example of making an assertion to imply analysis. Ruiu asserts that the link is encrypted, and that assertion is taken as fact.
  • Self-serving bias: In this case, an explanation is provided in support of (and in line with) other badBIOS claims, in the clear absence of evidence. Not only is there no evidence of the audio link being encrypted, there is also no evidence of the existence of the audio link. Making this assertion simply helps strengthen the case for the existence of badBIOS.
  • Evidence bias: This is also an example of accepting confirmatory evidence with little assessment — critical or otherwise.  

Notes: Ruiu takes the signal at 35kHz to denote data communications between systems infected with badBIOS, and then asserts that the signal is encrypted, even offering details of the packet format: it contains a 4-byte header and an 8-byte encrypted payload.

Observations: As noted in the references, Ruiu explains that his team has not captured data showing what badBIOS is downloading onto the system, nor obtained the modules that badBIOS has pulled as part of its exploit kit; and in the same sentence he makes the claim that the communication is encrypted.
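The one concrete detail in this assertion, the claimed 12-byte frame consisting of a 4-byte header and an 8-byte payload, could at least be stated precisely. The parser below is hypothetical (the field layout and byte order are assumptions, not documented facts) and serves only to show how publishing captured frames would make the claim testable:

```python
import struct

# Hypothetical frame layout based solely on Ruiu's description:
# a 4-byte header followed by 8 opaque (allegedly encrypted) payload bytes.
FRAME = struct.Struct(">I8s")   # assumed big-endian u32 header + 8 raw bytes

def parse_frame(raw: bytes) -> dict:
    """Split one captured 12-byte frame into header and payload."""
    if len(raw) != FRAME.size:
        raise ValueError(f"expected {FRAME.size}-byte frame, got {len(raw)}")
    header, payload = FRAME.unpack(raw)
    return {"header": header, "payload": payload}

# Invented sample frame, purely for illustration.
frame = parse_frame(bytes([0, 0, 0, 42]) + bytes(range(8)))
```

Given real captures in this form, others could verify the claimed structure, measure the actual bit rate, and test whether the payload bytes are uniformly distributed, as an encrypted payload should be.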

Assertion 21: A USB disk that has been inserted into a machine infected with badBIOS, can cause a clean system to reboot.

Reference: “So we left it in there for a little bit, and looked at the files, and pulled it out, and reinserted it into one of the forensic systems. The message came up in the console, and then in about a minute or so the system just rebooted itself spontaneously, after plugging this thing in.”

Evidence: Ruiu provides no evidence for this assertion.


  • Confirmation bias: In this case, the observed behavior is taken as proof or confirmation of the presence of badBIOS, instead of looking for clues that would invalidate this assumption and ensure it stands up to scrutiny.   
  • Closure bias: This is an example of the case where any explanation is preferable to no explanation. It is more comfortable to reach closure by attributing an explanation to this observed behavior, instead of dealing with the uncertainty of not knowing the root cause — or having to spend time and resources finding it out.

Notes: What Ruiu is depicting is a correlation between two events: plugging the USB disk into an uninfected system, and that system rebooting. However, correlation does not imply causation: that the uninfected system rebooted a minute after the USB stick was plugged into it does not mean that one event caused the other.

Observations: Although at first glance this assertion may appear alarming, it is provided without any context, making an accurate analysis difficult. An indication that the behavior was reproducible would help in this case. Since Ruiu refers to more than one forensic system (implying that there are at least two), the next question to ask would be: was the USB disk plugged into any of the other forensic systems, and did any of them reboot as a result?

Assertion 25: badBIOS is able to block access to internet sites, notably sites that contain technical information pertaining to badBIOS’s method of operation, infection, or propagation.

Reference: “and there was some blocking going on. One of the sites that we were trying to get to, and I don’t know if this was conscious on their part, but this is what triggered the thought chain from me, was a site called flashboot.ru.” “Coincidentally Russian flash controller reflashing software sites are blocked on infected systems, and laptop bios downloads 404 #badBIOS”

Evidence: Ruiu does not provide any evidence to support this assertion.


  • Observation bias: This is an example of reaching a conclusion based on a misinterpreted observation, instead of subjecting the observation to critical analysis.
  • Freezing bias: In this case, Ruiu has reached a conclusion that provides an explanation in line with his assertion of the existence of badBIOS. No further explanation is sought. If badBIOS were blocking access to specific internet sites, or proxying their internet-bound traffic, there could be a noticeable change in speed, as traffic would have to be inspected by or filtered through an additional hop. A next question to ask would be: has there been a noticeable and measurable decrease in internet browsing speed?

Notes: As per RFC 2616, status code 404 (Not Found) indicates that the server has not found anything matching the requested URI; it is also commonly used when the server does not wish to reveal exactly why a request has been refused, or when no other response is applicable.

Observations: Little context is provided with this assertion, and troubleshooting steps are absent. Capturing the client-server communication (for example, through a mirrored switch port or a network tap) would determine whether the traffic is being sent out to the internet, or proxied by an infected machine. Trying to access the site from another device, such as a mobile phone, would indicate whether the problem is local to the network on which the badBIOS systems reside, or whether the problem lies with the flashboot.ru site. Attempting the connection over an extended period could determine whether the site was undergoing maintenance, experienced an outage, or whether a flashboot.ru site administrator inadvertently broke the link Ruiu was attempting to access.
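It is also worth remembering that a 404 is generated by the web server itself: receiving one means the request crossed the network and was answered, which argues against local blocking. The self-contained sketch below uses a throwaway local server purely for illustration (the /bios.bin path is invented) to show the distinction between a reachable site and a missing resource:

```python
import http.client
import http.server
import threading

class DemoHandler(http.server.BaseHTTPRequestHandler):
    """Toy server: serves '/' but answers 404 for a missing firmware file."""
    def do_GET(self):
        self.send_response(200 if self.path == "/" else 404)
        self.end_headers()
    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), DemoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def fetch_status(host, port, path):
    """Return the HTTP status code the server answered with."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("GET", path)
        return conn.getresponse().status
    finally:
        conn.close()

port = server.server_address[1]
ok = fetch_status("127.0.0.1", port, "/")               # site reachable
missing = fetch_status("127.0.0.1", port, "/bios.bin")  # server answered: 404
server.shutdown()
server.server_close()
```

Against the real site, a scripted check of this kind, run periodically and from hosts both inside and outside the suspect network, would separate server-side 404s from local interference.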

Assertion 26: badBIOS operators removed the malware from some infected boxes once there was public conversation regarding its method of operation, infection, and propagation.

Reference: “when we start talking about the USB and the ultrasound thing, it’s the first time we’ve ever seen them back off, so they actually pulled out of a bunch of boxes as soon as we started sniffing down that road, and so that’s been a really positive thing.”

Evidence: Ruiu does not provide any evidence to support this assertion. One could also ask the question: What evidence could Ruiu provide to back this claim? Since no evidence of the existence of badBIOS has been provided, it is difficult to prove the absence of that evidence.


  • Confirmation bias: In this case, Ruiu has observed unexplained behavior in some of his systems, and then stopped observing the unexplained behavior after going public with the assertions of the existence of badBIOS. Instead of seeking explanations for the unexpected behavior, he is identifying a causal relationship while confirming, by implication, his belief in the existence of badBIOS.
  • Evidence bias: This is an example of having a low threshold for what constitutes evidence. It is an unfounded assertion that results in a self-fulfilling prophecy. 
  • Assertion bias: This is an example of how something is taken to be a fact, simply because it has been asserted: analysis by assertion. 

Notes: Proving this assertion would be akin to trying to prove a double negative while implicitly assuming the positive: first trying to prove the absence of badBIOS from previously-infected systems (on which the presence of badBIOS has never been confirmed), and then trying to prove a causal relationship for its absence.

Observations: At this point, not enough information has been provided to prove the presence of badBIOS. Trying to prove its absence, or establish a causal relationship, would require proof of its presence as a prerequisite.

Assertion 34: badBIOS uses TTF font files on Windows as part of its attack kit. On infected Windows systems, additional .ttf and .fon files can be observed; three of them (meiryo, meiryob, and malgunnb) are larger than expected.

Reference: “Now suspect font files as a part of the infection vector. Found 246 extra ttf files, 150 fon files in addition to unusual DLLs #badBIOS”

Evidence: Ruiu made available an archive containing what appear to be the file contents of a Windows system. Included in this archive are the font files that he refers to in his assertion. In addition to this, Ruiu shared a link to the output of a utility that dumps the contents of TTF files (True Type Font File Dumper), with the caption “is this an TTF attack vector? table 63 of meiro.ttc.”


  • Observation bias: In this case, Ruiu has observed a file size for one font that is inconsistent with the file size of other font files on the system. A subsequent observation is that a number of font files are not digitally signed. These observations are taken to be corroborative evidence of the existence of badBIOS.

Notes: 0xebfe.net ran the sigcheck utility that is part of the SysInternals suite and found that 22 files out of 255 were not signed. Manually uploading a selection of these files to the online antivirus resource VirusTotal.com did not find known malware in any of them. The output referred to by Ruiu does show that the font meiro.ttc has a length several times that of all other fonts listed.

Observations: This finding can be characterized as inconclusive. It would however be grounds for a further review of the font files that failed the signature check, and specific analysis of the meiro.ttc file.
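A further review need not involve sharing the files themselves: VirusTotal can be queried by cryptographic hash, and publishing the digests would let anyone repeat the lookup. A minimal sketch of fingerprinting a suspect file (the demonstration file and its contents are invented for illustration):

```python
import hashlib
import tempfile

def sha256_file(path, chunk_size=65536):
    """SHA-256 fingerprint of a file, read in chunks to bound memory use."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demonstration on a throwaway file with known contents.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"abc")
fingerprint = sha256_file(tmp.name)
```

Running this over the 22 unsigned files, including meiro.ttc, and publishing the resulting digests would make the finding independently checkable.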

Assertion 36: badBIOS can disable the Windows registry editor.

Reference: “We had an air-gapped computer that just had its [firmware] BIOS reflashed, a fresh disk drive installed, and zero data on it, installed from a Windows system CD, … At one point, we were editing some of the components and our registry editor got disabled.”

Evidence: Ruiu provides no evidence for this assertion.


  • Confirmation bias: This is an example of discarding information that dismisses, invalidates, or contradicts the belief in the advanced capabilities of badBIOS. Instead of heeding the well-known and well-documented warnings and caveats that accompany editing the Windows registry, Ruiu takes an error condition in the registry editor as confirming evidence of badBIOS.
  • Closure bias: In this example the simple explanation that unexpected behavior is attributable to badBIOS brings closure to the problem. There is no need to spend more time on it: the answer is known, and it is in line with other assertions.

Notes: This assertion comes with very little accompanying detail. The behavior described however, is not necessarily anomalous. The registry hive is a database file, which can be left in an inconsistent state by an ungraceful shutdown, or corrupted by faulty hardware.

Observations: The assertion does not make clear what registry keys were being edited, what other changes were being made to the registry, or what characterized the registry editor’s disabled status. Did the registry editor enter an error condition and halt? Was it still functional but disallowed changes to keys? Was it functional but disallowed changes to the keys that Ruiu and his team had been writing to? A number of troubleshooting steps could help provide clarity with respect to this claim. Since the registry keys are stored in a hive, which is a file that resides on disk (some hives reside in memory), a first step to determine the validity of this assertion would be to check the permissions on the corresponding hive when the “registry editor got disabled.” Did the permissions change? Other troubleshooting steps could involve trying alternate registry editing tools to determine whether they exhibit the same behavior, or trying to edit the same keys via the Windows command line utility reg (reg add <keyname> /v <valuename> /d <data>).

Assertion 38: badBIOS reflashes the system BIOS and is able to persist after the machine has been re-flashed with legitimate firmware.

Reference: “We’ve already found some persistent BIOS malware that survives re-flashing with it.”

Evidence: Ruiu does not provide evidence for this assertion.


  • Wishful thinking bias: This can be an example of interpreting a behavior in a certain way, because one has been primed to want it to be true — and the more one wants it to be true, the more confidence one has in the correctness of the assertion.  

Notes: In his detailed rebuttal of Ruiu’s claims, Jaenke refutes the idea of a portable, hardware-independent BIOS, and explains that BIOS firmware is very hardware-specific. He explains, for example, that the same BIOS could not be burned (or ‘flashed’) onto two motherboards from the same vendor that do not share the same model number. He also goes on to state that modern BIOSes have very limited space (on the order of 6KB) available for a malicious payload.

Observations: Jaenke makes a strong case against this assertion, focusing on the difficulty associated with creating a ‘portable’ BIOS, as well as the justification provided by Ruiu for not having an infected BIOS dump to share. He writes, “If you are MISSING a block, you will fail checksum and fail a basic differential. If you have AN EXTRA block, you will fail checksum and fail basic differential. There is no ‘maybe’ no ‘if’ no ‘but.’ If it’s in the BIOS, it’s in a dump. If it’s in a dump, it’s detectable and extractable” (Jaenke, comment on Nov 1st, 2013 at 9:18 pm).


In order to analyze the claims presented by Ruiu, we can formulate alternative hypotheses, and test them against each of the claims made regarding behavior attributable to badBIOS. The process to follow is depicted in Figure 2.


The hypotheses against which each of the assertions can be tested, are:

  • H0: It is possible to rationally explain the behavior observed by Ruiu.
  • H1: The behavior observed by Ruiu is benign, the result of chance phenomena.
  • H2: The behavior observed by Ruiu is anomalous, the result of chance phenomena.
  • H3: The behavior observed by Ruiu is anomalous, the result of ordinary malware.
  • H4: The behavior observed by Ruiu is anomalous, the product of advanced state-sponsored malware.

In the interest of brevity, only two hypotheses (H0 and H4) will be tested against the first five assertions.

Assertion 1: OpenBSD systems infected with badBIOS can unexpectedly enter single user mode.

Alternative Hypotheses H0:
It is possible to rationally explain the behavior observed by Ruiu. In this case, the rational explanation is that the system entered single-user mode due to a fault in a hardware or software component.

Likely Outcomes:

  1. If this hypothesis is correct, there will be evidence in the system logs and application error logs indicating the reason why the system entered single-user mode.
  2. If this hypothesis is correct, the behavior should be reproducible on the same system, once the initial circumstances leading to the error condition are reproduced.
  3. If this hypothesis is correct, the behavior should be reproducible on other OpenBSD systems configured with the same hardware and software. 


Tests:

  1. Review system logs and application error logs, and look for entries occurring before and during the time that the system entered single-user mode.
  2. Take careful stock of external factors that preceded the event, and reproduce them. For example, in Assertion 21, Ruiu indicates that connecting a USB disk to one of their forensic systems triggers a reboot. Was a disk also inserted into this system before it entered single-user mode? Was a software package being installed or updated on this system immediately prior to it entering single-user mode?
  3. Conduct a system audit and record the modification, access, and change times (collectively referred to as MAC times) for all files on the affected system, and review files with MAC times immediately preceding the run-level change into single-user mode.
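The MAC-times audit in step 3 can be sketched in a few lines of Python. This is a minimal illustration, assuming the affected filesystem is reachable at some mount point; in practice the walk should be run against a forensic image, since stat()-ing files on a live system disturbs the very access times being collected:

```python
import os

def mac_times(root):
    """Walk `root` and record each file's modification, access, and
    metadata-change times (MAC times) as reported by stat()."""
    records = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            records[path] = (st.st_mtime, st.st_atime, st.st_ctime)
    return records

def touched_before(records, event_time, window=300):
    """Files with any MAC time falling in the `window` seconds immediately
    preceding `event_time` (e.g. the run-level change into single-user mode)."""
    return sorted(path for path, times in records.items()
                  if any(event_time - window <= t <= event_time for t in times))
```

Reviewing the files returned by `touched_before` against package manifests would then separate expected system activity from anything unexplained.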

Alternative Hypothesis H4:
The behavior observed by Ruiu is anomalous, caused by advanced malware. An example of the behavior expected of advanced malware is the stealthy, near-undetectable operation exhibited by Stuxnet.

Likely Outcomes:

  1. If this hypothesis is correct, then badBIOS operation should be covert. It should limit overt action or noticeable behavior, in order to remain undetected.
  2. If this hypothesis is correct, then there should be a benefit that badBIOS operators can derive from entering single-user mode; for example being able to carry out an operation that cannot be undertaken at run-levels 2-5. 
  3. If this hypothesis is correct, then entering single-user mode should aid badBIOS operators in accessing the target machine remotely, not impede them from doing so.


Analysis:

  1. A system that has entered single-user mode is noticeable: once in single-user mode, the system remains there until operator input at the system console changes the runlevel. This fails the stealth/covert operation test. If the system entered single-user mode as a result of malware malfunction, this should be evident from the log analysis covered by H0.
  2. Single-user mode is a runlevel at which certain core system services are disabled. If one needs to run commands at a runlevel where those services have been disabled, the requirement can be met by modifying run command scripts (rc scripts) that have been scheduled to run at the desired runlevel; it is not necessary to enter single-user mode.
  3. Networking services are disabled in single-user mode, meaning that a remote attacker would lose access to the target machine, failing the remote access test. If the networking services have been enabled (therefore leaving the machine in a state no longer technically runlevel 1), then this would be identified by looking at the running services on the system, which would fail the stealth/covert operation test.

Assertion 6: Deleting registry keys that badBIOS is using results in the malware entering an error condition.

Alternative Hypothesis H0:
It is possible to rationally explain the behavior observed by Ruiu. In this case, the rational explanation can derive from an understanding of what the Windows registry represents, and caveats that accompany modifying its contents. As mentioned in the Notes section of the Instinctive Analysis of Assertion 6 above, Russinovich, Solomon, and Ionescu (2012) characterize the Windows registry as the “systemwide database that contains the information required to boot and configure the system, systemwide software settings that control the operation of Windows, the security database, and per-user configuration settings” and urge “extreme caution” when making changes to it.

Likely Outcomes:

  1. If this hypothesis is correct, then deleting a collection of registry keys corresponding to a Windows program, should result in the program entering an error condition on the target system.
  2. If this hypothesis is correct, then deleting a collection of registry keys corresponding to a Windows program, should result in the program entering an error condition on other similarly-configured Windows systems.

Although the names of the specific keys that Ruiu and his team deleted are not made available, this outcome can be tested by:

  1. Running procmon and observing the registry keys being written to or accessed by any one program.
  2. Deleting those keys and observing whether the program enters an error condition.
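The procmon step can be made concrete by working from a CSV export of the captured events. The column names used below ("Process Name", "Operation", "Path") match Process Monitor's standard CSV export, but should be verified against an actual capture:

```python
import csv
import io

def registry_keys_used(procmon_csv_text, process_name):
    """From a Process Monitor CSV export, collect the registry paths that a
    given process read or wrote (operations such as RegOpenKey, RegSetValue)."""
    keys = set()
    for row in csv.DictReader(io.StringIO(procmon_csv_text)):
        if (row["Process Name"] == process_name
                and row["Operation"].startswith("Reg")):
            keys.add(row["Path"])
    return sorted(keys)
```

The resulting key list is the candidate set for the deletion step; re-running the same capture after deletion would show whether the program errors out, or whether something re-creates the keys.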

Alternative Hypothesis H4:
The behavior observed by Ruiu is anomalous, the product of advanced state-sponsored malware.

Likely Outcomes:

  1. If this hypothesis is correct, then deleting registry keys corresponding to the malware should not result in malware malfunction, because the malware would have several watchdog processes that would re-insert the deleted registry keys or files, and ensure continued operation of the malware.
  2. If this hypothesis is correct, then disabling one component of the malware by deleting registry keys in use by it, should not result in an error condition for the malware, but rather only for the specific module using those registry keys. Advanced malware should be able to recover and bring the disabled component back to operation.   


Tests:

  1. This hypothesis can be tested by trying to identify the watchdog process that re-inserts registry keys when they are deleted. The watchdog process can be short-lived, but it can be captured by a program that generates a process history, like procmon. This behavior was not reported by Ruiu.
  2. This hypothesis can also be tested by obtaining a memory dump of the infected system, and looking for the presence of short-lived processes that would not be present on a clean/uninfected system, using a tool like volatility. Short-lived processes that would fulfill this function were not reported by Ruiu.
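The memory-dump comparison in (2) reduces to a set difference over repeated process-list snapshots (for example, process listings taken at intervals). The snapshot representation below, a set of process names per sample, is an assumption for illustration:

```python
from collections import Counter

def unexpected_processes(baseline_snapshots, infected_snapshots):
    """Process names seen in any snapshot of the infected system but never
    on the clean baseline: candidates for malware components."""
    baseline = set().union(*baseline_snapshots)
    infected = set().union(*infected_snapshots)
    return sorted(infected - baseline)

def short_lived(snapshots, max_appearances=1):
    """Processes appearing in very few snapshots: candidates for a
    short-lived watchdog that re-inserts deleted registry keys."""
    counts = Counter(name for snap in snapshots for name in snap)
    return sorted(name for name, n in counts.items() if n <= max_appearances)
```

In practice the snapshots would come from repeated captures of the suspect system and a known-clean reference build, so that legitimate transient processes do not show up as anomalies.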

Assertion 12: An audio recording of badBIOS communication shows a pulse pattern in the 35kHz range, at approximately 128 bits per second.

Alternative Hypothesis H0:
It is possible to rationally explain the behavior observed by Ruiu. The rational explanation is the presence of electronic components that operate at the 35kHz range.

Likely Outcomes:

  1. If this hypothesis is correct, then one component present within the range of the measuring instrument (like the NCP1729 Switched Capacitor Voltage Inverter that operates at 35kHz) would be able to produce the same pulse pattern at the same frequency.
  2. If this hypothesis is correct, then the signal should attenuate if there are obstacles in the signal path.


Tests:

  1. This can be tested by conducting the measurement in an isolated environment, and accounting for the operating range of every electronic component within the range of the measuring instrument.
  2. This can be tested by moving the infected systems away from each other, placing physical obstacles (drywall) between them, and determining whether a signal attenuation is observed.
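For reference, a pulse pattern of the kind described in this assertion (a 35kHz carrier keyed on and off at 128 bits per second) can be synthesized and demodulated in a few lines. The sample rate and envelope threshold are assumed values, chosen only so that the 35kHz carrier sits below the Nyquist frequency:

```python
import math

RATE = 192_000              # assumed sample rate; must exceed 2 x 35 kHz
CARRIER = 35_000            # carrier frequency from the assertion
BITRATE = 128               # pulse rate from the assertion
SAMPLES_PER_BIT = RATE // BITRATE  # 1500 samples per bit

def ook_modulate(bits):
    """On-off-key a bit sequence onto the 35 kHz carrier."""
    out = []
    for i, bit in enumerate(bits):
        for n in range(SAMPLES_PER_BIT):
            t = (i * SAMPLES_PER_BIT + n) / RATE
            out.append(bit * math.sin(2 * math.pi * CARRIER * t))
    return out

def ook_demodulate(signal, threshold=0.1):
    """Recover bits by thresholding the per-bit RMS envelope."""
    bits = []
    for i in range(0, len(signal), SAMPLES_PER_BIT):
        chunk = signal[i:i + SAMPLES_PER_BIT]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        bits.append(1 if rms > threshold else 0)
    return bits
```

Repeating the demodulation on recordings taken with and without obstacles in the signal path would then quantify the attenuation predicted by outcome (2).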

Alternative Hypothesis H4:
The behavior observed by Ruiu is anomalous, the product of advanced state-sponsored malware.

Likely Outcomes:

  1. If this hypothesis is correct, then a radio frequency (RF) engineer should be able to positively identify the pattern as a communications link.
  2. If this hypothesis is correct, then an RF engineer should be able to identify the emitter’s radiation pattern and match it to what would be expected for that emitter.

Both (1) and (2) can be tested by designing a blind test, where two independent RF engineers are asked to separately measure and analyze the 35kHz signal, without being told what it is suspected of being. If both conclude that the 35kHz signal is consistent with a communications pattern, that would prove this hypothesis.

Assertion 16: The badBIOS audio link is encrypted.

Alternative Hypothesis H0:
It is possible to rationally explain the behavior observed by Ruiu. The rational explanation is that (a) the observed signal is not an audio link, and that it is therefore (b) not encrypted. The signal at 35kHz does not represent an ‘audio link’ between two devices, but rather an audio signal generated by an electronic component operating at that frequency.

Likely Outcomes:

  1. If this hypothesis is correct, then analysis of the waveform should show no difference between the start of the signal and the remainder of its lifetime.
  2. Since there is no audio input at the BIOS level (Jaenke), if this hypothesis is correct, then there should not be a delay between the time that the system starts up (thus powering the electronic component that transmits at 35kHz), and the time that the signal is observed. That is: the 35kHz signal should be observed immediately at system power-up. If there is a delay, other than what is specified by the electronic component’s technical specifications, then this hypothesis would fail.    


Tests:

  1. Use a spectrum analyzer, power the infected system up, capture the audio signal for 5 minutes, then power the system down.
  2. Use a spectrum analyzer as in (1), and record the time that the signal appears. Contrast it with the time that the infected system was powered up, and entered the different phases of the boot-up process.
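The timing measurement in step 2 can be approximated in software using the Goertzel algorithm, which computes the power of a single frequency bin and is well suited to watching one known frequency. The sample rate, chunk size, and threshold below are illustrative assumptions:

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Power of `freq` in `samples` (Goertzel algorithm, single DFT bin)."""
    coeff = 2 * math.cos(2 * math.pi * freq / sample_rate)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def onset_time(signal, sample_rate, freq, chunk=1024, threshold=10.0):
    """Time, in seconds from the start of the capture, at which `freq`
    first rises above `threshold`; None if it never does."""
    for i in range(0, len(signal), chunk):
        if goertzel_power(signal[i:i + chunk], sample_rate, freq) > threshold:
            return i / sample_rate
    return None
```

Comparing the onset time against the timestamps of the boot phases would show whether the 35kHz component appears at power-up (supporting H0) or only later.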

Alternative Hypothesis H4:
The behavior observed by Ruiu is anomalous, the product of advanced state-sponsored malware.

Likely Outcomes:

  1. If this hypothesis is correct, then the following stages should be observable by analyzing the signal produced by the infected machines from startup:
    i.   No link present.
    ii.  Link negotiation underway.
    iii. Link established.
    iv.  Link teardown underway.
  2. If this hypothesis is correct, then powering up other infected systems should introduce a variation in the audio signal, as the badBIOS audio protocol handles link negotiation with a new device while maintaining an existing audio link. This new-link negotiation should be observable in the audio signal every time a new infected device is introduced.  


Tests:

  1. Use a spectrum analyzer, power the infected system up, capture the audio signal for 5 minutes, then power the system down. Periods of transmission and pause would indicate link negotiation.
  2. Use a spectrum analyzer as in (1), and capture the audio signal for a minute, introduce a second infected device (take a note of the time), continue capturing the signal for another minute, then introduce a third infected system (and take a note of the time). Then power down one system (take a note of the time), wait 30 seconds, and power down a second system (and make a note of the time). Analysis of the audio signal should show consistent changes every time that a new infected system was introduced (to account for a new link negotiation) and every time an infected device was removed (to account for link tear-down or retransmission to a dead peer, and eventual timeout). The presence of these changes would prove this hypothesis.

Assertion 21: A USB disk that has been inserted into a machine infected with badBIOS can cause a clean system to reboot.

Alternative Hypothesis H0:
It is possible to rationally explain the behavior observed by Ruiu. The rational explanation is that the system rebooted due to a system failure, related to either hardware or software.

Likely Outcomes:

  1. If this hypothesis is correct, then the system and application logs should contain a record of the error condition encountered by the system, which led to the reboot.
  2. If this hypothesis is correct, then the experiment should be repeatable: inserting the infected badBIOS USB disk into a similar system as the one that rebooted, should produce comparable results.


Tests:

  1. In order to test this hypothesis, review the system logs, error logs, and application logs, and identify the operations that took place between the time that the USB stick was inserted into the machine, and the time that the reboot was observed. The presence of error logs identifying the cause of the reboot would prove this hypothesis.
  2. In order to test this hypothesis, insert the infected USB stick into two other clean systems, and note the outcome. In order to be better prepared, make use of dtrace on Linux or OpenBSD systems, and procmon on Windows systems, filtering for write events.
    i.   If neither the second nor the third system reboots, then the first reboot can be attributed to chance.
    ii.  If only one of the two systems (the second or the third) reboots, then identify commonalities between the systems that rebooted (for example: same motherboards, same OS, etc).
    iii. If both systems (the second and the third) reboot, then identify commonalities between them: if the only commonality is the infected USB stick — and it is seen to consistently reboot dissimilar hardware and software platforms, that would cause this hypothesis to fail. 
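The log review in step 1 amounts to extracting the entries that fall between two timestamps. A minimal sketch, assuming a simplified, hypothetical timestamp format that would need to be adapted to the actual syslog layout in use:

```python
import re
from datetime import datetime

# Simplified, hypothetical log line format: "2013-11-01 10:02:13 kernel: message"
LOG_LINE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (.+)$")

def events_between(lines, start, end):
    """Log entries timestamped between USB insertion (`start`) and the
    observed reboot (`end`), inclusive."""
    found = []
    for line in lines:
        m = LOG_LINE.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
        if start <= ts <= end:
            found.append((ts, m.group(2)))
    return found
```

Running this window over the system, error, and application logs gives the candidate set of operations to inspect for the error condition that triggered the reboot.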

Alternative Hypothesis H4:
The behavior observed by Ruiu is anomalous, the product of advanced state-sponsored malware.

Likely Outcomes:

  1. If this hypothesis is correct, then the infected USB stick should cause other systems to reboot.
  2. If this hypothesis is correct, then connecting the USB stick to the target system through a USB protocol analyzer, should show traffic that is not observed when a clean/benign USB stick is connected.


Tests:

  1. In order to test this hypothesis, plug the USB stick into two dissimilar systems. If it successfully reboots these systems, or causes them to enter into an error state, then that would prove this hypothesis.
  2. In order to test this hypothesis, connect the USB stick to the target systems via a USB protocol analyzer, and analyze the instructions that are passed on the USB bus. The presence of messages that are a clear violation of the USB spec, or are corner-case implementations not found in clean/uninfected USB disk firmware, would prove this hypothesis.


The claims presented by Dragos Ruiu were picked up by several online security news publications: ThreatPost.com, ErrataSec.com, ArsTechnica.com, PCWorld.com, and even antivirus vendor Sophos, which, in response to the question ‘What we can predict’, published an article noting: “So the short answer to the question of what we have to say about BadBIOS is, “We can’t yet say.”” An analysis of the forty claims put forth by Ruiu, however, shows that the answer to this question is self-evident: there is no substantive proof to back the claims that Ruiu has made, with the notable exception of Assertion 34: the TTF font files missing a digital signature. What seems to have contributed to this story receiving the attention that it did, is that the claims that Ruiu makes are independently plausible, and have been so, in some cases, for years. Malware can jump airgaps using audio (Hanspach, Goetz, & Fraunhofer, 2013). It can hide in hardware (Stewin & Bystrov, 2013), even in BIOS. Malware infection via USB sticks is the norm more than the exception. Perhaps more importantly however, Stuxnet, Duqu, and Flame have raised our expectations of what new infection and propagation methods well-funded malware writers will employ. A side-effect of this increased expectation appears to have been our acceptance of claims without critical review. Grimes sums up what is wrong with the badBIOS story in four points:

  1. There is no ‘smoking gun’
  2. Ruiu makes errors in causation (he doesn’t exhibit scientific skepticism)
  3. The scenarios, although plausible, are highly unlikely
  4. No other antivirus vendor anywhere in the world has found samples of this malware in the wild. 

He goes on to draw an insightful parallel to Stuxnet, worth quoting in full:

“When Stuxnet was discovered, multiple antimalware companies around the world were finding copies. It started with one, then quickly spread to the others — not so with BadBIOS. Somehow the most sophisticated superbug on the planet was released three years ago — and only Ruiu has found it. What would be the spreader’s motivation for infecting Ruiu? With Stuxnet, the motivation was to stop World War III. Does Ruiu or his lab have something on the same order that needs to be found out or stopped?”

It is a fair point to ask, and one that remains unanswered: Why has nobody else observed samples of this malware?

It may be the case, that Ruiu’s intention was to engage the infosec community in the process of understanding the odd behavior he observed on his home-office systems, rather than to assert that it was conclusively the work of a new breed of state-sponsored malware, but that Twitter, Google+, and Facebook did not provide an adequate means of doing so. As it stands however, the assertions made by Ruiu provide a good starting point: an interesting set of observations to guide data gathering and investigation.


This study has covered the weaknesses present in the badBIOS analysis as done by Ruiu, and provided suggestions for how it can be improved. The main failing was that it was not undertaken in a critical manner: not enough questions were asked, there was a seeming abundance of confirmation bias, there were no alternate hypotheses, and there was no effort to disprove any of the assertions. Most troublesome for the researchers who wanted to help in this analysis, however, was the lack of evidence: Ruiu’s social media posts are replete with offers to help in the analysis. As it stands now, a critical analysis of the data provided by Ruiu, and of the method of examination he employed, indicates that badBIOS is not real. Whereas it is possible that a couple of Ruiu’s systems were infected with some form of malware (which would account for some of the behavior he observed, such as the failure to boot from CD, or the font files with no signature), none of the assertions he provided indicate that the malware is of the technical complexity attributed to badBIOS.

It is not all doom and gloom however. The strength of Ruiu’s approach was to think outside the boundary of what was considered plausible. As a colleague of mine put it, referring to badBIOS, “if it didn’t exist before, it does now.”


Upcoming Questions on Monitoring

In order to piece together the events in what is currently referred to as a cyber investigation, security professionals use a mix of experience, investigative skills, and logs. System logs, network access logs, DHCP server logs, HIPS logs, IDS logs, and NetFlow records, to name but a few. An enterprise may for example deploy intrusion detection systems at various points in its network, in order to detect specific traffic patterns, or connections to known-bad servers on the internet. They may also deploy NetFlow to provide a phone-bill type of record of connections going in and out of the corporate network: which IP address talked to which other IP address, for how long, at what time of the day, and how much data did they exchange in each direction.

This data, captured from various log sources helps augment a cyber investigator’s forensic capability. It can for example, help provide attribution: help ascribe a certain action to a specific individual. Depending on the type of cyber investigation, this attribution can be essential, and just like with radio triangulation, having multiple sources of attribution can help accurately identify the user. Being able to correlate an IDS alert to a DHCP lease, to a system logon, to a host-based IPS message saying that a process tried to execute from writeable memory, to a message showing a specific file accessed on a file share — all this can be valuable to an investigator.
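As a minimal illustration of this kind of correlation, the join between an IDS alert and the DHCP lease that held the alert’s source IP at that moment can be sketched as a time-windowed lookup; the record fields below are hypothetical:

```python
from datetime import datetime

def lease_for(alert, leases):
    """Return the DHCP lease that held the alert's source IP at the time of
    the alert (and hence the MAC address / host to attribute), or None."""
    for lease in leases:
        if (lease["ip"] == alert["src_ip"]
                and lease["start"] <= alert["time"] < lease["end"]):
            return lease
    return None
```

The time check matters because DHCP re-assigns addresses: the same IP can belong to different hosts an hour apart, which is exactly why a timestamp-less join would mis-attribute the alert.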

Collecting these logs from a variety of devices on the network forms part of the broader practice of security monitoring: it affords those tasked with defending the network from attackers, the chance to see anomalies and quickly mitigate them. This monitoring however, involves detecting patterns amidst human activity. As security monitoring becomes more prevalent, and as most of our online activity is logged either by our home internet service provider or by our employer, questions regarding the monitoring, usage, storage, and disposal of these records of human activity, will benefit from well-articulated explanations.

What follow are five questions which I see becoming more important, and quite possibly something security practitioners will be asked to explain. I don’t have complete answers for them, but have offered some thoughts on each, and would be happy to receive further comments on them. They are listed in no particular order.

1. How do we explain the seemingly unrestricted access that system administrators have to our data? 

We may be fine with security monitoring systems logging our actions on a network, but at some point, these logs have to reside in a data store or database in a datacenter. The question isn’t whether or not system administrators can be trusted with this data, but rather how to protect this data from accidental disclosure or wilful negligence, and if such an incident comes to pass, then to be able to trace the wrongful action back to an individual. This question can become particularly difficult to answer, in an age marked by an inherent distrust of authority.

One way to address this, is to make sure that the logs on critical systems aren’t stored locally, where they can be tampered with. They should rather be sent to a central logging repository under the control of the security monitoring team. Critics of the idea could argue that this becomes a question of “who is watching the watchers.” A response to this is that ‘the watchers’ in this case are held by a more specific ethics clause in their employment contract, specifically drafted for this purpose in consultation with legal counsel, and which they have to sign prior to taking up their work as watchers.

2. What should our attitude be towards individuals who have breached trust, hacked or social engineered their way into unauthorized data?

The current attitude seems to vary between admiration and punishment: ‘hackers’ are either apprehended or hired as security consultants. One may argue that breaching public trust is not a behavior that merits reward, and from a technical point of view, using a malware development toolkit to build and deliver a payload to exploit a recently-discovered vulnerability is not difficult. The equivalent in the physical world would be to take advantage of a new method of picking somebody’s front-door lock, breaking in, going through their belongings, and then selling them ‘burglary prevention services.’

The social engineering question touches us on a more personal level: we may have grandparents that have not adapted to a socially un-engineerable world. And this raises the question: what is the long-term strategy regarding social engineering? Is the objective to live in a society where social engineering has been significantly reduced, but where the notion of trust has become so strained that we can’t afford to help a stranger without being suspicious of them? The notion of responsibility plays into this, notably the old adage: “with power comes responsibility.” Having the ability to socially engineer unsuspecting victims need not be a reason to do so: just because we can, doesn’t necessarily mean we should. That is not to say that there isn’t a lot of work to be done in training individuals to resist a social engineering attack, only that the method whereby this information is conveyed should arguably benefit the victims more than extolling the skills of the attacker.


A Cybercrime Link to Criminological Theory

Just as “Criminology was born in a crime wave” [1], cyber-security, or countermeasures to address advances in computer-based crime, were born in the wake of a rapid-scale evolution in the complexity and in the damage potential of computer viruses. Whereas the 1980s and 90s were characterised primarily by viruses whose behavior could be described as disruptive, or whose impact could be described as an annoyance, the turn of the century saw a number of major computer security events, like the Melissa worm, the ILOVEYOU virus, Nimda, and the SQL Slammer worm. Thanks to the interconnected nature of the Internet, these viruses impacted a user count orders of magnitude larger than anything that had been developed in the previous two decades: they affected millions of computers.

The disruptive nature of these virus infections has continued to evolve, and is now characterised by behavior where infected computer systems can be remotely controlled by the attackers, and which for enterprises translates into the potential for intellectual property (IP) to be exfiltrated. These more sophisticated viruses have in turn led to the development of policies to assess the security posture of computer systems on a network.

In my day to day work, I see the application and enforcement of policies that have been directly influenced by the rise in criminal and malicious intent behind computer-based attacks. Whereas before it may have been acceptable to plug a computer into a corporate network, now security-minded corporations will have a “Minimum Access Policy”, which will detail the minimum conditions that the computer needs to meet, in order to be allowed on to the network, such as having an updated antivirus, updated software patch level, and so on. These policies can be enforced either by means of ‘Network Access Control’ systems, or by means of automated compliance-checking processes that scan the network trying to obtain a snapshot of the compliance level of the computers.
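Such a compliance check ultimately reduces to evaluating a host inventory record against the thresholds stated in the Minimum Access Policy. A sketch, with invented field names for illustration:

```python
def compliance_failures(host, policy):
    """Return the list of Minimum Access Policy conditions the host fails;
    an empty list means the host may be admitted to the network."""
    failures = []
    if host["av_definition_age_days"] > policy["max_av_definition_age_days"]:
        failures.append("antivirus definitions out of date")
    if host["patch_level"] < policy["min_patch_level"]:
        failures.append("patch level below minimum")
    if policy["require_disk_encryption"] and not host["disk_encrypted"]:
        failures.append("disk encryption not enabled")
    return failures
```

A Network Access Control system performs essentially this evaluation at connection time, while a compliance scanner performs it periodically across the fleet.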

I would argue that the motivation behind these policies is influenced both by behavior of law theories as well as by theories of crime. One example of an element that is influenced by positivist behavior of law theories includes the practical measuring of the impact that unsecured computer systems can pose to an organization and weighing them against the threshold at which employees will render the policy ineffectual (such as enforcing a password-expiration policy, but requiring employees to change passwords so infrequently that the policy adds little value, or so frequently that employees write them down and keep them on sticky-notes). An example of an element that is influenced by theories of criminal or deviant behavior is the insider threat or the corporate espionage mole, that is placed inside an organization by a competitor or foe, waiting for the right set of conditions to be met, in order to commit the crime of IP theft or industrial sabotage.


Defining Counterfeiting

Perhaps reminiscent of the lack of a common definition of terrorism as used between different government agencies, and even between international Law Enforcement agencies, the definition of counterfeiting varies somewhat between the FDA, the WHO, the WTO, and the OECD. These differences in the definition pertain both to the markets where the products are sold, as well as (in the case of medicine or pharmaceuticals) to the composition of the product. As far as the markets are concerned for example, a distinction can be made between primary markets, where the counterfeiters’ intention is to deceive consumers and pass a fake product for the genuine item, and secondary markets, where customers knowingly purchase imitation goods.

The World Health Organization (WHO) notes in its ‘General Information on Counterfeit Medicines’ page that the legal definition for a counterfeit drug (as listed in Black’s dictionary) is one “made by someone other than the genuine manufacturer, by copying or imitating an original product without authority or right, with a view to deceive or defraud, and then marketing the copied or forged drug as the original.” This definition appears to classify a drug as counterfeit only if it is traded in primary markets — where an attempt is made to pass the counterfeit for the original. In what may be an attempt to address this inaccuracy, the WHO has developed its own description of a counterfeit drug, which adds fraudulent mis-labeling, fake packaging, and drug composition to the definition.

A common feature of the definition disparity, is that the term can be described differently by the international organization in question, than it is by any of the organization’s constituent countries. In the case of the WHO for example, the definition of a counterfeit drug by three of its constituent countries is less precise than the organization’s own definition. Nigeria for example includes “any drug product which is not what it purports to be” and “any drug product whose label does not bear adequate directions for use” as criteria for labeling a drug as counterfeit, and in the Philippines the definition is expanded to include a drug “refilled in containers by unauthorized persons if the legitimate labels or marks are used,” both blurring the line between counterfeit and sub-standard drugs. U.S. Jurisprudence on the other hand uses what is by comparison a more comprehensive definition that includes a distinction between the counterfeit drug manufacturers, processors, and distributors on the one hand who used any likeness to the original product without authorization, and those who are in fact the lawful manufacturers of the drug.


Adapting to the Changing Fraud Landscape

In 2007, the PwC Global Economic Crime Survey listed the losses incurred by companies as a direct result of economic fraud, at $4.2 billion. The FBI describes product counterfeiting as “the crime of the 21st century”, as it “accounts for an estimated $600 billion in global trade and wreaks dire global health, safety and economic consequences on individuals, corporations, government and society.” Product counterfeiting falls into the classic economic fraud category of Intellectual Property Infringement — and is a sibling of patent and trademark violation, and industrial espionage. The scope of the threat posed by counterfeiting therefore is not limited to the unlawful duplication and distribution of goods and currency, but rather extends to the “illegal acquisition of trade secrets or company information.” It is interesting to note that the 2007 PwC report shows that IP infringement cases were on the rise from 2003 to 2007, but that the 2011 PwC report shows that the percentage of companies that reported IP Infringement cases has decreased by more than half (from 15% in 2009 to 7% in 2011) and that the number of companies that reported incidents of Cybercrime rose from 1% in 2009 to 23% in 2011. Cybercrime, especially in its ‘Advanced Persistent Threat’ (APT) form, can be a vector used by dedicated attackers to penetrate organizations and exfiltrate intellectual property, which in addition to aiding the perpetrators develop a market advantage, can lead to the creation of counterfeit products that can have exact (circuit-board or software-level) similarities to the original. One can argue then, that cases of IP infringement have not decreased, but merely taken another form, and as counterfeiters improve these techniques, their attacks will be distributed across new as well as traditional categories of economic fraud.

How then, might classic fraud investigators need to adjust, in order to adapt to the changing fraud landscape?


On Lawful Intercept

Lawful Intercept, the technology that allows authorized parties to listen in on electronic communications, has conceptually been around for as long as there has been a signal to intercept — in his book The Codebreakers, David Kahn writes:

“Just as the telegraph had made military communications much more effective but had also increased the possibility of interception over that of hand-carried dispatches, so radio’s vast amplification of military communications was accompanied by an enormously greater probability of interception.”

Prior to radio or cable interception even, there was the interception, extraction, and tamper-hiding of hand-carried correspondence. In the 1700s for example, Vienna’s Geheime Kabinetz-Kanzelei was known as the most proficient in Europe at removing the seals from, copying the contents of, and re-sealing diplomatic correspondence using forged seals.

The British Post Office Act of 1711, Kahn explains, “permitted government officials to open mail under warrants that they themselves issued.” This formed a legal precedent for wiretapping and lawful intercept: letter opening was followed by telegram interception, which in turn evolved into the different forms of electronic communications surveillance that exist today.


A Study of Baluchistan and Jundullah

DISCLAIMER: This report was prepared for a Counterterrorism and Intel class as part of a Criminal Justice major; the assignment was to conduct research on a terrorist group and make policy recommendations. It should be read within this context.

Setting the Stage

Blindfolded and accused of passing information about the group's activities to the Iranian authorities, Shahab Mansouri awaits his fate. Within minutes, his brother-in-law, then-Jundullah leader Abdolmalek Rigi, draws a knife and beheads him while a video camera records the event.
To date, Jundullah, whose name has been translated as 'Army of God', has mounted an estimated 11 attacks and been responsible for the deaths of over 175 people, although the group claims to have killed 400 Iranian soldiers. The term "jundullah", or "jundu'l-Malakut", is used in Islamic scripture to refer to the "heavenly armies of God, who bring the heavenly grace of God to the believers"; it does not have a militaristic connotation, although nowadays it is often misinterpreted as such.

Rigi was 19 years old when he founded Jundullah in 2003. In an open letter to the International Community, Rigi offered in 2007 to "accept the Universal Declaration of Human Rights and all other United Nations conventions and resolutions". In 2009 he wrote to U.N. Secretary General Ban Ki-moon, U.S. President Barack Obama, and Turkish Prime Minister Recep Tayyip Erdogan, requesting that a U.N. fact-finding mission be sent to Baluchistan. In his report, Dan Rather commented that Rigi "did not sound like any Muslim extremist we'd ever heard". Rigi was captured and killed by Iranian authorities in 2010, a few months after his 26th birthday. Following his death, Jundullah was designated a Foreign Terrorist Organization by the U.S. Department of State because of "numerous attacks resulting in the death and maiming of scores of Iranian civilians and government officials."

To help make sense of the rationale behind what can be labeled an odd fit for a terrorist group, we can look at the current spate of ‘murders in plain daylight’ that have been taking place in Iran. “In the bright morning hours of 28 October, a busy square in an affluent area of north-west Tehran became the scene of a bloody murder. This was no ordinary crime. It was more like a public spectacle, dragging on for an excruciating 45 minutes before the victim bled to death.” In another attack “a middle-aged woman stabs a man repeatedly in a busy street while a police car and later an ambulance watch on from a safe distance. … At one point, the attacker returns to her home unchallenged and emerges after a while, having put on the customary long Iranian coat. She then proceeds to stab the motionless victim a few more times.”

What measured foreign policy strategy would be an adequate response to what appears to be a society buckling under the weight of 30 years of restrictive theocratic rule? Can the same forces that have driven an otherwise warm and hospitable people, with such a rich culture and over 2,500 years of written civilization, to commit acts of barbarism also have been the catalyst that drove 19-year-old Rigi to author attacks that would garner comments like "Since the establishment of the Islamic Republic in 1979, no domestic group has embarrassed the authorities in Tehran more than Jundallah"?

In the view of one critic, "Unfortunately, and especially in the last five years and after the coming to power of Mr. Ahmadinejad, the Islamic republic's policy has been to create division among Shiites and Sunnis, and among Baluchis and non-Baluchis. In recent years, this vicious circle of violence has been perpetuated on one side by Jondollah and on the other by the Islamic republic, with victims claimed on either side."
