Security testing: Planes, trains, and cars

There were news reports last week about a man (not named here) who was detained and then banned from flying on United Airlines because he issued some tweets about how the onboard Wi-Fi entertainment system might be used to access flight systems, thus compromising flights. The news reports included interviews with the man, pilots, and other “experts.” The man said there is “risk” and even posted some images, supposedly of the risk, on Twitter. The experts said there “is minimal risk.”
I do not have in-depth personal knowledge of these exact systems. I have not been on a security test team directly trying to hack these systems, but I hope the manufacturers and the airlines DO have such teams in place for their embedded/IoT systems. Other large companies have security test teams in place for their systems. Companies such as Target and Home Depot likely now wish they had had more security testing in place before they were hacked. There are news stories and calls to action on security and testing by politicians almost every day. A new US government office in charge of such things was even just announced.
I can say that in some of my research for a mobile/embedded/IoT error taxonomy, I have seen unexpected interconnects between systems within planes, which gives one pause not just from a testing-integration security perspective, but from a development perspective also. The chance that there can be “sneak paths” is real in complex electronic software systems (Google “sneak circuit analysis” for a good V&V activity to consider doing). In my book I point out various attacks that teams should apply to their systems. Testing can never be totally complete and assure 0% risk in the security world, but it seems like we could be doing more, rather than waiting until “bad” things happen or our systems are on the news.
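To make sneak-path analysis concrete, here is a minimal sketch in Python: model the system’s interconnects as a directed graph and search for unexpected reachability from untrusted interfaces to critical subsystems. The nodes and edges are invented for illustration and do not describe any real aircraft:

    # Sneak-path sketch: can an untrusted interface reach a critical
    # subsystem through the interconnect graph? Nodes and edges are
    # invented for illustration, not a real aircraft architecture.
    interconnects = {
        "wifi_entertainment":  ["cabin_bus"],
        "cabin_bus":           ["maintenance_gateway"],
        "maintenance_gateway": ["avionics_bus"],   # the "sneak" link
        "avionics_bus":        ["flight_controls"],
    }

    def reachable(graph, start, target, seen=None):
        """Depth-first search for any path from start to target."""
        seen = seen if seen is not None else set()
        if start == target:
            return True
        seen.add(start)
        return any(reachable(graph, nxt, target, seen)
                   for nxt in graph.get(start, []) if nxt not in seen)

    # A V&V check: no path may exist from untrusted to critical nodes.
    print(reachable(interconnects, "wifi_entertainment", "flight_controls"))
    # True here means a sneak path the design review must justify or remove.

A review that runs this kind of check on the as-built interconnect list, not just the design documents, is the point of the exercise.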
As to the gentleman who got himself banned from United, while he may have thought he was doing a public service, in my publications and classes when I discuss security testing, I pretty much always tell people that the activities and attack patterns I define should be applied by teams who have been chartered to do security work. Making statements that seem like a threat, or actually attacking a system one is not authorized to attack, is at least unethical and in many cases illegal. DO NOT DO THIS! First, when we develop, run, and maintain modern computer systems, the stakeholders, including testers, have a responsibility to assure the qualities of what we create. Second, as good citizens of the world, we also have an obligation not to do things that are unethical or illegal. Please keep these things in mind as you build your testing skills. Security testing is one of the hottest areas to develop. Get into a sandbox and start working on security testing skills there, not in the public domain. Go for better security testing on the job.


IoT: The World of Software Grows into a Huge Continuum

Once we had software. It ran on large pieces of hardware called mainframes, and there were few direct users (circa 1950s-1960s), most of whom were “IT” people. Then came the mini-computers, and the priesthood of software-computer people expanded, as did the numbers and types of users (circa 1970s), but it was still a small club of members (maybe tens of thousands of people). And then in the late 1970s and 1980s, the age of the personal computer (PC) arrived. The number of software “IT” people expanded rapidly, as did the number of users. System users found home and business computers. Other computer users found gaming systems to be popular. Users expanded into the millions. The IT professionals also divided into sub-professions such as programmers, systems engineers, software engineers, QA personnel, and testers (to name a few).

During the 1980s and into the 1990s, people became aware of software bugs and how costly “computer” issues could be. We saw the first worms, web site performance issues made the nightly news occasionally, and other software “bugs” made people sensitive to the dark side of computers.

Here we are in the 21st century, and we have a mobile smart-device revolution. There are billions of people around the world using software on these tiny devices. Embedded software is going into more and more (all?) electronic devices. We are growing the Internet of Things (IoT). Almost every human is a software user (or will soon want to be), and now there are many non-human users (computers talking to computers). The number of “contexts” in which we use software has become a vast continuum.
The continuum is not without bumps and truncated roots, but it roughly starts with programmable devices, including ICs, FPGAs, and other simple circuits, continues on to small embedded devices, then big embedded devices (systems), to IoT, to mobile, to mobile-smart, to general PCs, to mini-computers, to large mainframes, and even to supercomputers. The systems we put software in are everywhere, and many times the users of these devices don’t even know they are interacting with software in computers. They do, however, notice when the system does not work as expected or needed. Are we working towards a Sky-Net (from the Terminator movie series), driven not by terminator robots but by little devices in your pocket? That remains to be seen.
So why this posting? I write about embedded and mobile software systems and the testing of these kinds of systems. Some call me an expert in these areas, but every few months I see a problem (a bug) occur in the continuum that seems to defy easy classification as “oh, this is a common issue like we see in PC networks” or “this is a problem because the embedded software user interface is limited”. There are many software bugs that seem to be “universal”. There are other bugs that seem to be “clustered” in a region of the software continuum. I hear testers saying “I test X type of system, so my test problem space can be limited”. However, it is not always clear how to classify each individual device and its software as embedded, mobile, smart, web, or IoT.
What this expanding and inter-mixed continuum means to software testers is potentially interesting. Once upon a time, I focused my testing on the kinds of bugs and approaches that were common to my embedded software device world. I could focus my tests. I did not worry about communications, big data(bases), or user “feelings.” Now, in some of my embedded testing, I need to test these areas too. My test problem space is getting larger. Likewise, testers of PC systems did not used to worry about battery life, movement through environments, and signal dropouts. Now maybe they should, as they move to tablets, mobile smartphone apps, and even IoT. Testers in many cases should not limit their test problem space the way the quote above does.
We, as testers, cannot stop learning and practicing our skills. We should not limit our test technologies to just test automation, exploratory testing, or human-scripted testing. We should have books on attack testing, lessons learned, exploratory testing, classic systematic testing, and many others, including books on programming, software engineering, art, and general concepts. I have several hundred books in the IT space in my library, as well as books on philosophy, engineering, science, art, and creative thinking. I have historic standards, guides, and references, and still my library and knowledge are woefully incomplete. I do not know enough.
I will have more to say on IoT and the changing environments of software testing in the future. Catch me here.

Mobile/IoT: When is it a bug and when is it an improvement?

Last week the Chevy Volt underwent a massive recall to update its software (you can Google it). It seems they want to issue a software update to shut the car off after about 1.5 hours, because the car could be left “running” (off the batteries at first, then the small gas motor) and the user could miss it because the car is “too quiet”.

Now, to their credit, they do have warnings that sound when the car is left running and not moving for some period of time. This is good, but there still have been cases where the warnings were ignored, the car ran, and carbon monoxide built up in a garage (this is bad). So they added a new feature to “fail safe” by shutting the engine down totally after the warnings and time period. Easy fix.
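A minimal sketch of this kind of fail-safe logic, in Python; the thresholds and states are my own illustrative assumptions, not the manufacturer’s actual design:

    # Illustrative fail-safe sketch: shut the car down if left "running"
    # and unattended too long. All thresholds below are assumptions for
    # illustration, not the manufacturer's actual values.
    WARNING_AFTER_S = 30 * 60       # assumed: first warning after 30 min idle
    SHUTDOWN_AFTER_S = 90 * 60      # assumed: hard shutdown after ~1.5 hours

    def idle_watchdog(idle_seconds: int, warnings_acknowledged: bool) -> str:
        """Return the action the vehicle controller should take."""
        if idle_seconds >= SHUTDOWN_AFTER_S:
            return "SHUTDOWN"            # fail-safe: stop the engine entirely
        if idle_seconds >= WARNING_AFTER_S and not warnings_acknowledged:
            return "SOUND_WARNING"       # existing behavior: chime/alert
        return "NO_ACTION"

    # A tester would walk the boundary values: just below, at, and just
    # above each threshold, with warnings both acknowledged and ignored.
    for t in (WARNING_AFTER_S - 1, WARNING_AFTER_S,
              SHUTDOWN_AFTER_S - 1, SHUTDOWN_AFTER_S):
        print(t, idle_watchdog(t, warnings_acknowledged=False))

The point of the fix is the unconditional last resort: no matter what the driver ignores, the engine stops.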

Those of us working with systems-hardware have long had the joke “we will fix that hardware-system problem in the software”. This is the great thing about software. We can do this.

But I was left wondering. Did testers report the “missing feature” years ago, but miss the carbon monoxide buildup as an effect? Did the system have a comprehensive risk/failure modes and effects analysis (FMEA) done? Many embedded/IoT systems do have these very detailed analyses (my book talks about these).
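For readers who have not seen one, here is a minimal sketch of the risk priority number (RPN) arithmetic at the core of an FMEA; the failure modes and 1-10 ratings below are hypothetical, chosen only to illustrate the technique:

    # Classic FMEA risk priority number: RPN = severity x occurrence x
    # detection, each rated 1-10. The entries below are hypothetical.
    failure_modes = [
        # (description,                               severity, occurrence, detection)
        ("engine left running unattended in garage",        10,          3,         7),
        ("idle warning chime not heard by driver",           8,          5,         6),
    ]

    for desc, sev, occ, det in failure_modes:
        rpn = sev * occ * det
        print(f"RPN={rpn:4d}  {desc}")
    # High-RPN rows get mitigations (e.g., an automatic shutdown) and
    # targeted tests before release.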

Now, many software developers will argue they met requirements, and so the new feature was not a bug fix but a system improvement. I argue that the cost of the recall might have paid for a more comprehensive system-software-test FMEA. What else did they miss?

Security, Insecurity, and the Single Tester

Many of us have been writing and talking about the need for better security throughout the IT industry. I focus on mobile, embedded, and IoT, but security spans all types of software and systems. Now, with all of the news reports, industry incidents, and government involvement (ref.: Obama’s speech this last week), there is recognition and expanding action, but what should be next for security testing?
I don’t think there will be consensus. For example, some say better software development, including good testing. Others want to build walls and cyber safeguards, and to have a cyber offense, just as with traditional military and police actions. Still others want rules, regulations, and penalties for both sides of security (the companies and the bad-guy hackers).
As with most complex things, and cyber IT certainly is complex, the answer will be yes to all of the above and many more. A key step has happened: almost all of the users and interested parties now seem to be aware of the problems. When people find out that I am involved in security testing, they ask “should we be scared of IT security?” and I answer “yes, and you should be more scared”.
There are actions being taken and many things to be considered. We have cyber security warriors in some places. I have written about the need to grow the number of cyber-tester security warriors. And while I realize that the testing community will not agree on my list of near-term actions to take to create cyber-security warriors (experts), I think there are many possible paths toward becoming better cyber security test warriors. The general actions I think should be considered include:
1. Learn more about general software testing from books and classes, and for some, consider certifications (ISTQB is not supported as a good idea by everyone, but it can be an early step to gaining test KNOWLEDGE)
2. Practice tester skills (see the AST skill list – TBD web site) and become an experienced and ever-improving skilled tester (I have been practicing testing for 35 years, and still have more to learn)
3. Learn more about the hacker’s world and their skills (this means we need to become “good” hackers/crackers to be able to “fight fire with fire”)
4. Understand and work with government and industry regulations and standards (yes, I know many of you don’t believe in them, but standards will be put in place and get abused, so we should work to make standards and policies as acceptable as possible and then know how to use them correctly)
5. Know more about how to better develop software, including security and other qualities (this means we must be more than testers, e.g., be software and system people)
6. Understand risk-based testing driven by the integrity level of the software (IEEE 1012 and ISO 29119; again, I know that some people dislike these standards, but they represent a baseline starting point from which to tailor processes, techniques, and documents; a minimal sketch of such prioritization follows this list)
7. Be better practiced in testing non-functional elements of the software-system, including quality testing, model-driven testing, math-based test techniques, and attack-based exploratory testing (these approaches are often misunderstood or poorly used “tools” of our industry, and testers should have a great many test techniques they can use beyond just checking requirements).
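As a minimal sketch of item 6, here is risk-based test prioritization in the spirit of IEEE 1012 integrity levels; the levels, weights, and example features are my own illustrative assumptions, not values from the standard:

    # Risk-based test prioritization sketch: order test targets by
    # integrity level (in the spirit of IEEE 1012) times failure
    # likelihood. Weights and features are illustrative assumptions.
    INTEGRITY_WEIGHT = {4: 1000, 3: 100, 2: 10, 1: 1}  # 4 = life-critical

    features = [
        # (name,                  integrity_level, estimated_failure_likelihood)
        ("engine shutdown path",                4, 0.2),
        ("infotainment pairing",                2, 0.6),
        ("trip odometer display",               1, 0.3),
    ]

    def risk_score(level: int, likelihood: float) -> float:
        return INTEGRITY_WEIGHT[level] * likelihood

    # Highest-risk features get the deepest testing budget first.
    for name, level, likelihood in sorted(
            features, key=lambda f: risk_score(f[1], f[2]), reverse=True):
        print(f"{risk_score(level, likelihood):8.1f}  {name}")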
I know quite a few software people and testers feel that many of these ideas are “wrong” and even toxic. I hear that software and testing are arts, and that we need more creativity. True, but software testing is much more. I hear that we need more rigor using math or models in engineering development and test. True, in part, but software is more than just science and engineering. I hear that we don’t need regulations for our test industry because it is “too young”, or because regulation restricts free thinking and lets managers hide from the “hard work of testing” by claiming “dumb” conformance to meaningless documents. There is truth in these statements too, but every discipline started some place (read the history of the early books on medical anatomy), and having some regulations can force better development behaviors than the current “open season” in the wild, wild west of software security. For example, clean air regulations have helped to keep the air clean in many USA cities in my lifetime. We should not too quickly dismiss standards.
We will never solve all aspects of cyber security. Just as with security in everyday life, we will need the police, military, artists, and engineers. This has not changed for thousands of years. Cyber has just given the bad players a new environment in which to commit crimes, make war, and do evil things. Most of us would not trade away the benefits that IT gives us, so we must deal with the costs that cyber brings. Security is one of those costs.

Privacy, Security, IoT, and the Car

This story is rapidly being overcome by events at a particular and unnamed automobile manufacturer, but it illustrates one basic risk of embedded-IoT (Internet of Things) devices. To make a long story shorter, the manufacturer has decided to replace the software system featured in this story due to many problems reported with it, most of which dealt with functionality and usability. Those reports are not a consideration in this post. However, replacing any vehicle’s system and/or having recalls on a vehicle (or by a company) can be very costly. As testers, we should help avoid these types of issues by providing information about functionality and, these days, about the privacy and security of systems. (In testing, just like with the TSA, “See something? Say something.”)

What I noticed with this bug is an integration-privacy issue, and I suspect this type of bug will recur all too often in other IoT systems. Testers, as well as company executives, should take a lesson from this story. The car-system behavior outlined here is what happens when you interface two devices without fully informing your users that the systems are being “integrated” with information sharing, what the impacts of such integration are, and what the associated risks are. I credit my wife for getting stung by this, or rather for making this finding.

In this scenario, the user could access a USB/power outlet in the car, which one would normally use to “get power.” In this case, my wife plugged her cell phone into the USB/power outlet. The car’s system recognized the device and allowed “hands free” control of the device while providing power.

Great, right? Well, as in most stories, there is more to it.

The car’s system was able to read information from her cell phone, including phone numbers, user name, and email information (and we are not sure what else). This data persisted in the car’s system: my wife could see five other users of the car’s system. You guessed it, it was a rental car, and each renter had plugged their phone into the car’s system and had their information “downloaded”. So ask yourself this question when renting a car: “Do you expect to have all of the information on your cell phone downloaded to the car’s systems even if you only wish to plug into a power outlet to recharge your phone?” And to take it one step further: “Do you expect anyone else to access the information from your cell phone once you unplug your phone and exit the car?”
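A minimal sketch of the kind of check a tester could run for this; the HeadUnit class below is a hypothetical stand-in that models the reported behavior, not any real car’s API:

    # Privacy-persistence check sketch for a car head unit. HeadUnit is a
    # hypothetical model of the reported behavior: imported profiles are
    # kept across sessions (renters).
    class HeadUnit:
        def __init__(self):
            self.profiles = []                 # persists across "renters"
        def connect_usb(self, owner, contacts):
            self.profiles.append({"owner": owner, "contacts": contacts})
        def end_session(self):
            pass                               # the bug: nothing is purged here

    def phone_data_purged_after_session(unit: HeadUnit) -> bool:
        unit.connect_usb("Renter A", ["555-0100", "555-0101"])
        unit.end_session()                     # rental returned, key off
        # Privacy expectation under test: no personal data survives
        # the session without explicit user consent.
        return not any(p["owner"] == "Renter A" for p in unit.profiles)

    print("purged after session?", phone_data_purged_after_session(HeadUnit()))
    # Prints False for this model, mirroring the five leftover renter
    # profiles described above.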

This is at least a privacy-concern feature (I’d call it a bug), and I would be willing to bet none of the other users knew this information was being “shared”, as it was not disclosed to them through the on-screen help nor when they rented the car. Further, having reviewed the user guide, I saw no disclosure of such “features.” If I were a “bad guy” hacker, I could probably use the car’s information as a starting point for an attack on any user of a rental car with such a system. So features that compromise privacy can become a security risk too.

Additionally, a dealer of this type of car (not the rental company) had access to this “leaked” information, because not long after my wife’s rental period was over, she received “welcome” advertisements from a dealer of the car offering discounted service features. Obviously the dealer did not really understand how the car’s system was working, since they thought my wife had bought the car. We are not certain how much the rental car company knows about this sort of “issue.” The blind leading the uninformed.

So here is the moral of this story for security-IoT testers: you must test beyond the required functionality of the system and assess other quality characteristics. The hackers are breaking into retailers and major companies and attacking everyday people. IoT testing must consider the privacy and security issues of these devices. This means IoT-embedded testers must think at the system level (this was an integration between systems) and go beyond basic functional testing. For this system, because it is being phased out, I am guessing that even functional/user-interface testing was done poorly, and so probably nothing was done with security and privacy. Sad.

Many companies tend not to worry about these issues until they have a major and costly breach. Then they scramble to fix the problem and spend lots of money on litigation efforts. As testers, we should move the finding of issues (testing) to the left, and explain to the decision makers (management) the risks of such problems (bad publicity, lost money, rebuilding or replacing systems, lawyers, lawsuits and court costs, etc.) before problems escape into the field and onto the nightly news. (No, I did not call any reporters. Tempting, though.)

This has been a food-for-thought story. Use it to think about, and to explain to management, IoT-embedded security-privacy risks. And think twice before plugging your cell phone into a power outlet in one of the newer cars, anyone’s.

Testing Snowcat software


After two months on the road, I find myself at home getting ready for winter at the same time upper New York State is being buried by snow and using special “snow machines” (this is what the news called them; I call them snowcats) to save people. I have “tested” one of these machines. Yes, they have software. And yes, they have bugs.

I posted a question on Twitter about how one would go about testing such a software system. Of course, the simple answer is that you would test it just as you would any piece of software (test requirements and functions), but this can be lacking. My first answer (and probably the last phase of testing) would be field testing, which is where I found my bugs.

In this case I mean testing in the field with a user. Here is what I did:

Field conditions defined = snowy mountainside (3 to 5 feet of snow), at night, in the cold (-10 C), novice user (users must be considered), and machine warmed up to a good running temperature (these systems take a while to “warm up”)

The exact definition of these should be captured in the test setup, but I won’t do that in a blog posting.

Test defined = Novice user runs machine. Adjust machine blade setting. Adjust machine power settings. Adjust machine speed settings. Adjust machine direction settings. Vary settings (combinatorial test attack) from low to high. (A sketch of generating these combinations follows.)
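Here is a minimal sketch of generating the setting combinations for this attack; the setting names and value ranges are illustrative assumptions about the snowcat’s controls, not its real interface:

    # Sketch of generating the setting combinations for the attack above.
    # Setting names and value ranges are illustrative assumptions.
    from itertools import product

    settings = {
        "blade":     ["low", "mid", "high"],
        "power":     ["low", "mid", "high"],
        "speed":     ["low", "mid", "high"],
        "direction": ["forward", "reverse"],
    }

    # Full cartesian product: 3*3*3*2 = 54 combinations. In a real field
    # test you might thin this to pairwise coverage to fit time limits.
    for combo in product(*settings.values()):
        case = dict(zip(settings.keys(), combo))
        print(case)   # each case becomes a sequence of rapid inputs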

I was doing exploratory testing but guided by an attack. Here is what I found and could repeat.

Bug report =

Bug 1) Unexpected (and undocumented) machine safe mode (engine running but no movement) entered when rapid inputs of power, speed, direction, and blade were combined at the same time. No user or safety warnings documented this combination as “not permitted”. The system had to be reset (powered off, with a 60-second wait) to clear the error condition.

Bug 2) Microprocessor warning message “110a” did not inform the user of actions to take, was meaningless (like message #404), and no user documentation of this message type was provided in the operations guide.

Bug 3) In a better system design, before safing the system, an alarm warning the user to “stop” the current input actions might have been sounded, improving the usability and safety of the system by avoiding the safe mode altogether.

So in my partial answer to my own question, you can see what I did and found. These bugs made a user “unhappy” (we want happy users).

There should be more to testing an embedded system than what I outline here. Having a user-tester find such errors in the field can and should be avoided. We have a long way to go in testing embedded software control devices. Some industries get it, and some are still learning.

More mobile security hack stories => better testing needed?

I continue to worry (paranoid?) about mobile and embedded security, hacking, and the lack of quality testing efforts. Check out these links:
inflight wifi hacks
https://www.yahoo.com/tech/researcher-says-airplanes-can-be-hacked-via-in-flight-93967652124.html

thieves hack key fobs
https://autos.yahoo.com/blogs/motoramic/it-s-official–car-thieves-can-hack-your-keyless-entry–insurers-warn-142252463.html

So am I paranoid, or are they really out to get us (development projects)? What is the cost to us, and does the cost justify added security testing? Will standards (e.g., ISO 29119) and government regulations drive testing, or will the market?

My guess is that some places and projects will take mobile/embedded security testing seriously and some won’t, and users will be left to vote with their feet. As individual testers, I think we provide information to our development teams so the context of the project can help decide what is needed. In James Whittaker’s books and my book on software test attacks (available on Amazon), there is a starting point for security testing, but for all that I know, there is far more that I don’t know about security testing.

Not exactly mobile or embedded related – tester certifications/BOKs

On other sites, I’ve written about tester certifications and skill lists. I have supported certification and skill-definition efforts because I believe that, while there are abuses of certification, gaining knowledge in a field is part of being a profession, and we need bodies of knowledge (BOKs) as starting points. However, I do agree that the software test industry is still maturing, and so information gained from a certification or standard should be treated with some level of care (does it work, when does it not work or fail, when should we change what we know, etc.). I bring these points up because, as the risks of software increase due to things such as failures and security issues, the pressure to have “certified” engineers will increase (see http://www.computerworld.com/s/article/9250174/Cybersecurity_should_be_professionalized?source=CTWNLE_nlt_security_2014-08-06 for example). Groups like IEEE and ISTQB promote certs. State governments already regulate the word “engineer”. The current certification bodies of knowledge may be incomplete and/or wrong, but just because a BOK is not perfect does not mean we should ignore or discount it; rather, we should work to make it better. Sooner or later, the BOK will become “law” and the expectation of employers. Not every project, domain, or area of software will need certs, but the areas I work in, such as embedded and mobile, where lives or large sums of money may be at risk, will likely get focus for certs sooner.

I hope more people will become involved in certifications, both in producing and using them and in critiquing them.

Mobile world seems slow to close security holes

Various researchers have reported security bugs, but the bugs remain open for years; see:
http://www.computerworld.com/s/article/9250110/Android_vulnerability_still_a_threat_after_nearly_two_years?source=CTWNLE_nlt_security_2014-08-04

I have written and reported on mobile security issues and testing. However, it feels like until somebody actually exploits a fault and the exploitation makes the news, many vendors and app providers do nothing.

I do not put any information on my phone that I don’t want public. I am even unsure about using many mobile web apps with registered logins. If the mobile app world wants to become trusted, I feel they must do a better job, but just as with the web world during the “.com” bubble years back, I suspect it will take time until companies come to understand the importance of quality and security in the mobile world.

It is sad.

More recalls on cars (mobile) for software issues

I’ve been to the other side of the world and back. I talk testing wherever I go. A few people listen and work to improve their skills, but it seems testers are fighting an uphill battle most of the time. Software grows into everything (IoT, mobile, cars, medical), and so the bugs continue. You can find recall numbers with simple Google searches. As testers, we need to provide information to the stakeholders deciding on releases of these devices. They need to know whether more testing is needed, as it seems recalls are “too numerous”.