By Veronica Schmitt
This article was originally featured in PenTest Magazine.
Once, I was pretty much a regular girl. That was before my improvements, which made me more cyborg than pure human. I was unlucky that the “natural” pacemaker in my heart failed. In the end, I had to hand this very important function over to a machine: I had to get an implantable medical device to save my life. As someone interested in the workings of these wonderful machines, I found it difficult to get medical staff to answer my questions about them. Is this device really secure? I know it will help me live, but can someone pwn my heart? The silence I received when raising these questions left me despondent. So I started doing my own research to try to identify why so little thought is given to security in medical devices. Surely a matter of life and death should count for more than convenience and power?
The question is not “what kind of person would hack a pacemaker?”. The answer to that one is easy – it is the same kind of person who walks up to someone and stabs them. The world is full of bad people doing bad things. We now live in an age where we add every device to the Internet of Things because it is convenient. I cannot see why we should make it easier for someone to gain unlawful access to my heart.
I agree this sounds like a scene from a sci-fi movie. This is, however, the struggle facing many individuals who require medical devices to survive. There have been big technological advancements, and it would seem that even though these devices last longer and do more, they remain insecure and open to attack. Medical devices are becoming more sophisticated; however, the sad truth is that manufacturers are playing catch-up with legacy security flaws. The dilemma for the individuals with these devices is real - we need our medical staff to have immediate access to the device. The flip side of this coin is that we also need to protect these sensitive devices from outside access. This is one big catch-22. If we increase security, do we shift the paradigm to reduce access and availability? My concern, as someone who sits in the middle of this, is what happens when the first device is compromised and the patient dies? This could have a profound impact on future patients not wanting these devices.
These devices have been shown to be prone to attacks, which could have significant effects on any patient. Recent research, published on the Whitescope IO blog, showed that there were major systemic vulnerabilities in medical devices and the units they communicate with. In simple terms, a pacemaker is a programmable computer with an antenna. There has been a rise in attacks from ransomware authors, which have left the public more vulnerable than ever. The fact that these devices do not have signed firmware opens them to the risk of being reloaded with custom firmware that can hold a pacemaker ransom. Worse yet, these devices only require the telemetry wand (the device used to interact with the pacemaker) to send an initial message to open long-distance communication channels. The Internet of Things might as well be relabeled the ‘Internet of Healthcare’ - there is a multitude of medical equipment now being wirelessly connected (with little to no security measures built in). Both healthcare and security professionals have to have the conversation about these flaws and how, together, we can make these devices better and less vulnerable.
These devices come with legacy problems and a lack of security. The focus should shift from being a convenient life-saving device to being a secure life-saving device. The work done by multiple researchers has shown that these devices fail basic application security. For example, once authentication has been achieved by mimicking the telemetry wand’s authentication mechanism, an attacker is able to flood the pacemaker - there are no defences against replay attacks. This type of attack also keeps the pacemaker from entering sleep mode to conserve battery power and has the potential to drain the device battery more quickly. For anyone who has a device, this is a devastating consequence. Not only can your device potentially be drained - it can be caused to malfunction. As someone who lives with the vulnerability, it scares and concerns me. To date, none of the manufacturers have alluded to any devices being compromised whilst implanted in a person (however, there have been numerous recalls on St Jude devices).
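To make the replay problem concrete, here is a minimal sketch of the kind of defence that is missing: a single-use random challenge (nonce) bound to each command with a keyed MAC, so that a captured message cannot simply be resent. Everything here is illustrative - the class names, the shared key and the command format are invented for the example and do not reflect any real pacemaker protocol.

```python
import hashlib
import hmac
import secrets

# Hypothetical shared key; real devices would need proper key management.
SHARED_KEY = b"demo-key-not-for-real-devices"


class Device:
    """Toy model of a device that only accepts commands bound to a fresh nonce."""

    def __init__(self, key: bytes):
        self.key = key
        self.current_nonce = None

    def issue_challenge(self) -> bytes:
        # A fresh random nonce per session makes every valid message unique.
        self.current_nonce = secrets.token_bytes(16)
        return self.current_nonce

    def accept_command(self, command: bytes, nonce: bytes, tag: bytes) -> bool:
        # Reject anything not bound to the one outstanding challenge.
        if self.current_nonce is None or nonce != self.current_nonce:
            return False
        self.current_nonce = None  # challenge is single-use: replays fail here
        expected = hmac.new(self.key, nonce + command, hashlib.sha256).digest()
        return hmac.compare_digest(tag, expected)


def programmer_sign(key: bytes, command: bytes, nonce: bytes) -> bytes:
    """What the legitimate programmer would compute for a command."""
    return hmac.new(key, nonce + command, hashlib.sha256).digest()


device = Device(SHARED_KEY)
nonce = device.issue_challenge()
tag = programmer_sign(SHARED_KEY, b"SET_RATE:70", nonce)

assert device.accept_command(b"SET_RATE:70", nonce, tag)      # fresh command accepted
assert not device.accept_command(b"SET_RATE:70", nonce, tag)  # replayed copy rejected
```

Because the nonce is consumed on first use, an attacker who records a valid wand exchange gains nothing by replaying it - which is precisely the property the researchers found absent.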
Imagine a scenario where someone intercepts communications from a distance using radio-frequency antennas, thereby capturing all the data sent from the medical programmer to your device. These devices are vulnerable to man-in-the-middle attacks during a live session. Research published at the 2008 IEEE Symposium on Security and Privacy showed that researchers were able to rather easily reconfigure a device to fail to defibrillate the patient and to pace inconsistently. Granted, I am not at the top of some assassination list (as far as I know), but I do not believe that makes my concern less valid.
Another big concern is when medical professionals encourage patients to use home systems connected to the IoT. This allows these devices to be accessed and configured from across the globe. In my opinion - as both a security researcher and a patient - this creates a scary scenario. The home system does not offer protection strong enough to ensure that your heart will not be pwned. We all know being connected to the IoT means risking compromise. Patients should not be in the business of connecting the machine that keeps them alive to the IoT. This is the equivalent of placing your most valuable possession on the pavement in the hope that no one steals it.
We trust pacemaker manufacturers. We entrust one of our most vital life functions to them, without having much choice in the matter. However, the code that runs these devices remains proprietary and cannot be collaborated on or tested. In the open source community, big strides have been made to make code development more secure. Perhaps by following an approach similar to that of the open source community, progress can be made to better these devices.
We should not be in the business of sacrificing security for convenience or power. As a patient, I would rather sleep knowing my device has been hardened and accept the inconvenience of replacing it more regularly than the converse. I feel that we, as the security community, should be assisting medical manufacturers in addressing the security vulnerabilities in the devices that literally keep people alive. The simple fact is that we are not dealing with just ones and zeroes. For some, this is a life-or-death situation. If one of these devices malfunctions, the patient carrying it runs the risk of dying. Power and convenience should not trump safe and secure. Together, as a community, we can make the difference that betters these necessary devices and makes them safer to use.
Veronica Schmitt is a veteran digital forensic scientist, malware researcher and Partner at DFIRLABS. Chat to her on Twitter here.
Read Part I of this post here.
Last time, I gave a little background on digital evidence – where it comes from, why it’s relevant and how it should be gathered. Now we get to the interesting, finicky part: how does the legal system deal with digital evidence? (Disclaimer: this is the part where I say that I’m not a lawyer, no matter how much I loved Suits and The Good Wife).
I am from South Africa, so I’m mainly coming from a South African legal context and perspective (this is not to say that these issues won’t affect you if you’re from another country – these considerations are rather universal, and different countries have different ways of addressing them).
In a great informative paper written by Prof Murdoch Watney on the South African legal position regarding electronic evidence, the legal questions and concerns surrounding digital (or electronic) evidence are grouped into two main categories:
While these issues can get quite complex legally, I’ll be addressing them from the digital forensic examiner’s point of view – the ways we try to ensure that the evidence we extract is correct, unaltered and interpreted correctly.
Everyone who’s worked with a computer likely knows how easily files and their metadata can be altered. The mere act of logging onto a computer (no matter if you’re the end user or the investigator) can alter the device’s state, thereby altering the source of evidence – now we’re seeing issues of integrity and originality being raised, which can easily influence the evidence’s admissibility in court. This is why we use the process of “imaging” (which I briefly mentioned in Part I) to preserve evidence correctly. Imaging a device results in a read-only image or “clone” of the original device’s entire storage (or file directory, depending on the needs of a case). The data/potential evidence is essentially stored in a forensic container and verified via hashing. During the verification process, the hash calculated over the forensic image is compared to the hash calculated over the original evidence in order to ensure that no alterations occurred during the imaging process. Imaging also has the advantage of eliminating our reliance on the device that the evidence was found on, i.e. the suspect’s mobile phone or laptop, since we now have an image file that we can safely store and work with.
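The verification step described above can be sketched in a few lines. This is an illustrative toy, not a forensic tool: real imaging software streams the source device in chunks and records the hashes in the case file, and the byte strings below are made up for demonstration.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given bytes."""
    # Real tools read the source in fixed-size chunks; we hash directly here.
    return hashlib.sha256(data).hexdigest()

# Stand-ins for the original evidence and the acquired forensic image.
original_evidence = b"contents of the suspect drive"
forensic_image = bytes(original_evidence)  # a bit-for-bit copy

# Verification: the image's hash must match the original's.
assert sha256_of(original_evidence) == sha256_of(forensic_image)

# Any alteration, even a single byte, changes the hash and fails verification.
tampered = forensic_image[:-1] + b"!"
assert sha256_of(original_evidence) != sha256_of(tampered)
```

This is why hashing carries evidential weight: matching digests demonstrate that the working copy is identical to the source at acquisition time, and any later tampering is immediately detectable.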
I was watching a fabulous Korean legal drama called Witch’s Court (마녀의 법정) the other day. However, in one of its less-fabulous moments, the prosecutors had obtained a tablet that contained incriminating video evidence that they were building their case around. First, they watched the videos on the tablet itself (no forensic imaging, or any attempt to preserve the evidence, was made!). Later, they discovered the videos had been purposely deleted off the tablet - while in their custody - due to an app installed on it. Cue the panic that their evidence was gone. Meanwhile, I was incoherently yelling at the screen that this wouldn’t have been a problem if your forensics people had been following the proper procedures! (I’m super fun at parties). Luckily it worked out all right in the end, if you’d been stressing.
Prof Watney goes on to talk about how the evidential weight of an exhibit is decided by the court, and how several guidelines as laid out by Section 15 of the Electronic Communications and Transactions Act 25 of 2002 must be followed. One line I’d like to draw attention to in Prof Watney’s paper is this one: “…in using these guidelines a court will probably need some expert help to understand technical procedures…”.
It’s easy to misinterpret digital evidence, especially if one doesn’t have a technical background (or even if you do, sometimes). One of the most infamous examples of the misinterpretation of digital evidence is the Casey Anthony trial – two separate forensic tools gave differing outputs after parsing a Mozilla database, and the prosecution’s case suffered when it was determined that their interpretation was incorrect (a fantastic technical breakdown of the digital forensics side of the case can be found here). The sheer complexity of the systems that are being dealt with in digital forensics – whether it’s Windows, Android, iOS, or any of the numerous third-party data structures we find within these environments – means that a deep technical understanding of computers is a must.
Avoiding mistakes like these is why it’s so important that a digital forensics examiner understands not only the systems and data structures of the evidence they’re examining, but also the workings and limitations of the forensic tool being used to conduct the examination. This means education and training in the technical aspects of IT systems; it means quality assurance on all forensic reports originating from a digital examination and analysis; it means the verification and validation of forensic tools. It means that the digital forensic examiner must do everything in his or her power to ensure that the interpretation of evidence provided is as accurate and correct as possible – because ultimately, people’s lives and futures may very well be on the line.
Saskia Kuschke is a digital forensics examiner and resident pop-culture reference generator at DFIRLABS.
I’ve recently started watching season 3 of FOX’s detective show Lucifer. I’ve always enjoyed the show, despite how improbable it is that the resident forensic scientist appears to specialise in digital forensics, toxicology, ballistics, DNA forensics or whatever the plot requires - though I suppose that’s hardly the most unrealistic thing in a show about the devil solving crimes in Los Angeles.
Something I’ve observed is that whenever a case grinds to a halt in this show (and many other police procedurals), some piece of digital evidence usually comes to light to save the day – whether it’s suddenly-unearthed CCTV footage, or a photo of a number plate, or some incriminating emails or Google searches.
So let’s talk about digital evidence (DE) – what it is, why it’s the 21st century’s new treasure trove for the inquiring investigator, and the legal niceties surrounding it.
What is Digital Evidence (DE)?
Our entire lives are becoming increasingly more digital. We socialise on Facebook, show off on Instagram, argue on Twitter and mess around on Reddit. We do our work on laptops and smartphones, and play games on consoles and desktops. We order things online, we do our banking online, we regularly query the great oracle Google regarding the mysteries of the universe and where the nearest Chinese takeaway place is – that sort of thing.
Is it any wonder, then, that crime is another thing happening in the digital space?
The US National Institute of Justice gives us this definition for digital evidence:
“Digital evidence is information stored or transmitted in binary form that may be relied on in court. It can be found on a computer hard drive, a mobile phone, a personal digital assistant (PDA), a CD, and a flash card in a digital camera, among other places.”
I like this definition because it emphasises how many sources of digital evidence there can be, beyond your basic computer and mobile phone. Any device that stores or tracks data in some way is a potential source of evidence – yes, even your FitBit.
How is DE gathered and preserved?
This is where the DFIR (digital forensics and incident response) gentlemen and ladies come in. Gathering and preserving digital evidence is not simply copying and pasting a piece of data and handing it over (we’ll get into the reason for this when we get to the legal issues in Part II). There are accepted and standardised methods for acquiring data for evidential purposes, and special tools to facilitate these processes.
The Scientific Working Group on Digital Evidence (SWGDE) outlines the best practices when conducting an acquisition. The entire document can be found here (the document is at version 3.1 at the time of writing), but the basic idea is:
Since this post is getting a little longer than I anticipated, I’ll be talking about the legal issues surrounding digital evidence in Part II.
In our first three years we have done some really interesting cases, ranging from stock standard fraud cases committed using computers, to rather complex hacking cases. It has been great working such a variety of cases and being successful in each one. We set about being different and holding ourselves to the highest international standards in digital forensics, and I think that this has shown itself in the cases we have worked on and the results we have achieved.
One of my fondest memories of the last three years was when we convinced a fairly large corporation that had regularly used another digital forensics service provider to give us a try on a case. We were happy to do the case pro bono so that the corporation could compare apples with apples - us against their existing digital forensics service provider. After doing the case and presenting what for us was a relatively straightforward affidavit, the client called and said to me, "so is this what I should have been getting all along?"
That really made me happy, because we have prided ourselves on not simply sticking with the status quo when it comes to digital forensics in South Africa, but in trying to raise the standard in our profession.
Today is thus, for me, not only a celebration of three years as a practice, but a celebration of daring to be different, of daring to aspire to the highest standards of scientific professionalism. After all, at the end of the day we serve justice, and if we strive to be better, then the cause of justice is better served: guilty people are convicted and innocent people vindicated. That is the real celebration.
Those of us who have done any kind of programming will understand the significance of the title of this post. It is generally the first thing we learn to code: a simple program that displays the text "Hello World" on the screen of the computer we are using. I suppose the real-world analogue would be a baby learning to take its first steps or make its first sound. In many ways this post launches the DFIRLABS presence on the Internet - our way of saying "Hello World", even though we have been in practice since 2014.
DFIRLABS represents a new step in a 25-year journey of helping people. After school I joined the South African Police Service and was recruited into the elite Commercial Branch while in Police College. I loved investigating white collar crime; it was like playing a game of chess with a chess master, and each case allowed me to match my wits against some very smart criminals. When I got the chance to join what I saw as a more focused agency, I did, joining the Special Investigating Unit as a specialist investigator, where my computer skills were put to good use and I began practicing digital forensics. I had finally embraced my real passion, computer science, and was using it to make a real difference in the cases we investigated.
I was very grateful to a number of people I served under in the Special Investigating Unit who gave me the space and opportunity to grow and develop. Adv. Willem Heath allowed me to show how digital forensics and analytics could enhance investigations and gave me the space to function. Adv. Willie Hofmeyr gave me the opportunity to set up a world-class digital forensics laboratory to serve the Special Investigating Unit and other South African law enforcement agencies. Adv. Nomvula Mokhatla gave me the opportunity to begin helping other agencies throughout Africa. All of this opened up the possibility that I could make more of a difference outside of law enforcement than I could within it.
As an independent forensic scientist I can now be true to the pure ethos of forensic science and simply focus on the facts, the science, and help anyone who needs our help.