ISO 29119 and rent seekers

I have already written a few things on the ISO 29119 software testing standard. I am the IEEE project editor for this suite of testing standards.

One item I have not addressed, which the loyal opposition complains about, is what they call “rent seekers”. As I understand it, these are people whose primary motivation for doing something is that it can make them money while providing minimal (or no) benefit, or even having a negative impact on the industry as a whole. Various standards writers, training providers for certifications, tool vendors, and others get this “label”. While it is true that many people do these things to make money (we all have to make a living), in my opinion there is nothing wrong with making money if there is an overall positive benefit from the activity being performed.

Now, it is possible to debate whether ISO 29119 will, in the long run, have a positive benefit to industry. Studies and many project data points are required before the pros and cons of the “benefit” of 29119 can be determined. My estimate is that this will take a decade or more.

I can say that my knowledge of the writers and voters on ISO 29119 tells me that many of them have yet to receive any positive “rent” income. In fact, many members of the ISO working group have spent far more money attending meetings and doing the work than they have gained. As for me, I have spent thousands of dollars of my own money to write, present, and vote on the standard, far more than I have received in compensation. I do not expect this inequality to change any time soon, nor am I looking to change my income with the implementation of this suite of standards. My work on ISO 29119 has simply been to provide some (not all) in the test industry a worldwide standard and a starting point for improving the industry.

Note, I did not say there would not be people and companies that make money from 29119 or other standards. I am sure there will be training, audits, and consulting as a result of the adoption of ISO 29119. There will also be people who make money “fixing things” when part of a standard such as 29119 goes wrong, as every “ideal” can be subjected to misuse (in the wrong hands or with misbegotten intent).

We all must make a living, but I for one won’t be making much money off of ISO 29119.


Silos and IoT testing

I have been writing and thinking about the “silo effect” as applied to testing. A silo is the “circle of influence” you travel or work in, and it shapes your thinking. For example, if you think only about testing and talk only to other testers, you may miss important ideas from development, operations, management, and elsewhere. Silos in part caused the housing crash of 2008, because many people did not see the risk in sub-prime loans.
Testers need to fight being stuck in a silo by knowing about testing, development, support engineering, users, management, and many other areas.
For IoT it will be worse, as IoT software testers will need to “expand” into hardware development, hardware testing, big data analytics, ops, and yet more areas. Learning and being skilled in so many areas for IoT will be necessary and will take a lot of effort. How fun.

Complex Embedded/Mobile/IoT Software Is Common and May Be The Weakest Link

The public now has expectations about the software in modern devices, such as their cars or their phones. They have had experiences with apps and see news stories such as:
http://www.nytimes.com/2015/09/27/business/complex-car-software-becomes-the-weak-spot-under-the-hood.html?smprod=nytcore-ipad&smid=nytcore-ipad-share&_r=0

Many of us have been writing and preaching about the activities that should take place in developing such software-driven devices. Parts of the software industry know some of the things that should be done. There are many software-related books (hundreds, including mine), societies (e.g., IEEE, ACM, ISO, and AST), conferences (it seems like there is one every week), schools (Agile, traditional, DevOps, context-driven), and standards about systems, software, and testing (e.g., 12207, 15288, and 29119).
However, the industry is adding software to everything, with millions or even hundreds of millions of lines of code. The software is adding functionality to devices and (hopefully) making life better, but the industry (not just the automotive makers) still struggles with how to do software “right.” A Forrester study indicated that only 1% of organizations are mature and another 14% are maturing, with the rest not so mature. At the same time, the public is beginning to demand better protection, regulations, and software. People will accept some software shortfalls, but as software costs rise in terms of money, time (wasted by users), and company reputation, the pressure to get the right level of software quality will grow, and more companies will try to become mature in Mobile/Embedded/IoT.
Now, what counts as “good enough” software will vary device to device, system to system, and even user to user. Government will set baselines, courts will determine common law, and the public will vote with their money, while manufacturers struggle to get “good enough” right. Cars are just the tip of a large iceberg. As IoT grows and the amounts of software and data expand, the software-computer industry will continue to be challenged. In some ways the software challenge is not new. I have been reading about it my whole career (35 years). The industry knows many concepts that can help and argues about others.
The help available to industry is contained in the sources mentioned above, but many of these references are underused or missed altogether. There is no one “best” right way or reference, yet many software people use only one or two ideals as if there were a “best.” Engineering is about heuristics, so testers, developers, managers, and support people all need knowledge of the engineering references and the practiced skill to make trade-offs between the options. However, it seems that many people are stuck in their “silos” and miss references/ideals until the gap hurts their software system product.
I will have more about how we are all stuck in silos later.

Security testing: Planes, trains, and cars

There were news reports last week about a man (not named here) who was detained and then banned from flying on United Airlines because he issued some tweets about how the onboard Wi-Fi entertainment system might be used to access flight systems, thus compromising flights. The news reports included interviews with the man, pilots, and other “experts.” The man said there is “risk” and even posted some images, supposedly of the risk, on Twitter. The experts said there “is minimal risk.”
I do not have in-depth personal knowledge of these exact systems. I have not been on a security test team directly trying to hack these systems, but I hope the manufacturers and the airlines DO have such teams in place for their embedded/IoT systems. Other large companies have security test teams in place for their systems. Companies such as Target and Home Depot likely now wish they had had more security testing in place before they were hacked. There are news stories and calls to action on security and testing by politicians almost every day. A new US government office in charge of such things was even just announced.
I can say that in some of my research for a mobile/embedded/IoT error taxonomy, I have seen unexpected interconnects between systems within planes, which gives one pause, not just from a testing-integration security perspective but from a development perspective as well. The chance of “sneak paths” is real in complex electronic software systems (Google “sneak circuit analysis” for a good V&V activity to consider doing). In my book I point out various attacks that teams should apply to their systems. Now, testing cannot be totally complete and assure 0% risk in the security world, but it seems like we could be doing more, and not waiting until “bad” things happen or our systems are on the news.
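To make the sneak-path idea concrete, here is a minimal sketch (in Python) of the kind of check a test team might automate: walk a declared interconnect graph and flag any reachability that policy forbids. The component names, the links, and the forbidden-path policy are entirely hypothetical illustrations, not a description of any real aircraft system.

    # Minimal sneak-path check: breadth-first search over a declared
    # interconnect graph, flagging reachability that policy forbids.
    # Component names and links are hypothetical illustrations.
    from collections import deque

    interconnects = {                      # directed links between subsystems
        "wifi_entertainment": ["cabin_network"],
        "cabin_network": ["maintenance_bus"],
        "maintenance_bus": ["flight_management"],
        "flight_management": [],
    }

    forbidden = [("wifi_entertainment", "flight_management")]

    def reachable(graph, start, goal):
        """Return True if goal can be reached from start."""
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            if node == goal:
                return True
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    for src, dst in forbidden:
        if reachable(interconnects, src, dst):
            print(f"Possible sneak path: {src} -> {dst}")

A real analysis would, of course, derive the graph from actual wiring and interface documents rather than a hand-written table, but even this toy shows how a forbidden connection can hide behind two “harmless” links.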
As to the gentleman who got himself banned from United: while he may have thought he was doing a public service, in my publications and classes, when I discuss security testing, I pretty much always tell people that the activities and attack patterns I define should be applied only by teams that have been chartered to do security work. Making statements that seem like a threat, or actually attacking a system one is not authorized to test, is at least unethical and in many cases illegal. DO NOT DO THIS! First, in developing, running, and maintaining modern computer systems, the stakeholders, including testers, have a responsibility to assure the qualities of what we create. Second, as good citizens of the world, we also have an obligation not to do things that are unethical or illegal. Please keep these things in mind as you build your testing skills. Security testing is one of the hottest areas to develop. Get into a sandbox and start working on security testing skills there, not in the public domain. Go for better security testing on the job.
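For anyone wanting a legal, sandboxed first step, the sketch below shows the spirit of a simple robustness/fuzzing attack: feed hostile inputs to a function you own and watch for crashes. The parse_record target is an invented stand-in; real security testing uses far richer tooling, but the point is to practice on your own code, not someone else’s system.

    # A toy, fully sandboxed fuzzing loop against code you own.
    # parse_record is a hypothetical stand-in for any input handler.
    import random
    import string

    def parse_record(text: str) -> dict:
        """Example target: parses 'key=value' pairs separated by ';'."""
        return dict(pair.split("=", 1) for pair in text.split(";"))

    random.seed(42)  # reproducible runs
    for trial in range(1000):
        length = random.randint(0, 40)
        payload = "".join(random.choice(string.printable) for _ in range(length))
        try:
            parse_record(payload)
        except ValueError:
            pass  # expected rejection of malformed input
        except Exception as exc:  # anything else is a potential bug
            print(f"trial {trial}: unexpected {type(exc).__name__} on {payload!r}")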

IoT- The World of Software Grows into a Huge Continuum

Once we had software. It ran on large pieces of hardware called mainframes, and there were few direct users (circa 1950s-1960s), most of whom were “IT” people. Then came mini-computers, and the priesthood of software-computer people expanded, as did the numbers and types of users (circa 1970s), but it was still a small club of members (maybe tens of thousands of people). Then, in the late 1970s and 1980s, the age of the personal computer (PC) arrived. The number of software “IT” people expanded rapidly, as did the number of users. System users found home and business computers. Other computer users found gaming systems. Users expanded into the millions. The IT professionals also divided into sub-professions such as programmers, systems engineers, software engineers, QA personnel, and testers (to name a few).
During the 1980s and into the 1990s, people also became aware of software bugs and how costly “computer” issues could be. We saw the first worms, web site performance issues occasionally made the nightly news, and other software “bugs” made people sensitive to the dark side of computers.
Here we are in the 21st century, and we have a mobile smart-device revolution. There are billions of people around the world using software on these tiny devices. Embedded software is going into more and more (all?) electronic devices. We are growing the Internet of Things (IoT). Almost every human is a software user (or will soon want to be), and now there are many non-human users (computers talking to computers). The number of “contexts” in which we use software has become a vast continuum.
The continuum is not without bumps and truncated roots, but it roughly starts with programmable devices, including ICE, FPGAs, and other simple circuits; continues with small embedded devices; on to big embedded devices (systems); to IoT; to mobile; to mobile-smart; to general PCs; to mini-computers; to large mainframes; and even to super computers. The systems we put software in are everywhere, and many times the users of these devices don’t even know they are interacting with software in computers. They do, however, notice when the system does not work as expected or needed. Are we working toward a Skynet (from the Terminator movie series), driven not by terminator robots but by the little devices in your pocket? That remains to be seen.
So why this posting? I write about embedded and mobile software systems and the testing of these kinds of systems. Some call me an expert in these areas, yet every few months I see a problem (a bug) that occurs in the continuum, and the problem seems to defy easy classification as “oh, this is a common issue like we see in PC networks” or “this is a problem because the embedded software user interface is limited.” There are many software bugs that seem to be “universal.” There are other bugs that seem to be “clustered” in a region of the software continuum. I hear testers saying, “I test X type of system, and so my test problem space can be limited.” However, it is not always clear how to classify each individual device and its software as embedded, mobile, smart, web, or IoT.
What this expanding and inter-mixed continuum means to software testers is potentially interesting. Once upon a time, I focused my testing on the kinds of bugs and approaches that were common to my embedded software device world. I could focus my tests. I did not worry about communications, big data(bases), or user “feelings.” Now, on some of my embedded testing, I need to test these areas too. My test problem space is getting larger. Likewise, testers of PC systems did not used to worry about battery life, movement through environments, and signal drop-outs. Now maybe they should, when they move to tablets, mobile smart-phone apps, and even IoT. In many cases testers should not limit their test problem space in the way quoted above.
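One concrete way to handle a growing test problem space is to make the context dimensions explicit and generate test combinations from them. Here is a minimal sketch; the dimensions and their values are invented for illustration, and a real suite would prune the combinations with a pairwise or risk-based strategy.

    # Sketch: enumerate test contexts as explicit dimensions so that
    # "embedded" concerns (power, signal) and "PC/app" concerns (data)
    # get covered together. Dimension values are illustrative only.
    from itertools import product

    dimensions = {
        "battery":  ["full", "low", "charging"],
        "signal":   ["strong", "weak", "dropout"],
        "data_set": ["empty", "typical", "huge"],
    }

    for battery, signal, data_set in product(*dimensions.values()):
        # In a real suite each combination would drive a test case;
        # here we simply list the contexts a tester should consider.
        print(f"test context: battery={battery}, signal={signal}, data={data_set}")
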
We, as testers, cannot stop learning and practicing our skills. We should not limit our test technologies to just test automation, exploratory testing, or human-scripted testing. We should have books on attack testing, lessons learned, exploratory testing, and classic systematic testing, and many others, including books on programming, software engineering, art, and general concepts. I have several hundred books in the IT space in my library, as well as books on philosophy, engineering, science, art, and creative thinking. I have historic standards, guides, and references, and still my library and knowledge are woefully incomplete. I do not know enough.
I will have more to say on IoT and the changing environments of software testing in the future. Catch me here.

Mobile/IoT: When is it a bug and when is it an improvement?

Last week the Chevy Volt car underwent a massive recall to update its software (you can Google that). It seems they want to issue a software update to shut the car off after about 1.5 hours, because the car could be left “running” (off the batteries at first, and then the small gas motor) and the user could miss it because the car is “too quiet.”

Now, to their credit, they do have warnings that sound when the car is left running and not moving for some period of time. This is good, but there still have been cases where the warnings were ignored, the car kept running, and carbon monoxide (CO) built up in a garage (this is bad). So they added a new feature to “fail safe” by shutting the engine down totally after the warnings and the time period. Easy fix.
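The fail-safe logic itself is conceptually simple. Here is a minimal sketch of the state logic; the thresholds and function names are my own guesses for illustration, since the real Volt values and interfaces are not public in the reports I have seen.

    # Sketch of a fail-safe auto-shutdown: warn first, then cut the
    # engine if the vehicle keeps idling unattended. Thresholds and
    # action names are hypothetical placeholders.
    WARN_AFTER_MIN = 30        # minutes of stationary running before warning
    SHUTDOWN_AFTER_MIN = 90    # roughly the ~1.5 hours in the recall reports

    def idle_policy(minutes_stationary_running: int) -> str:
        """Return the action the controller should take."""
        if minutes_stationary_running >= SHUTDOWN_AFTER_MIN:
            return "shutdown_engine"   # fail safe: stop CO production
        if minutes_stationary_running >= WARN_AFTER_MIN:
            return "sound_warning"
        return "no_action"

    # The boundary values a tester would probe:
    for t in (0, 29, 30, 89, 90, 200):
        print(t, "->", idle_policy(t))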

Those of us working with hardware systems have long had the joke “we will fix that hardware-system problem in the software.” This is the great thing about software. We can do this.

But I was left wondering: did testers report the “missing feature” years ago but miss the CO buildup as an effect? Did the system have a comprehensive risk/failure modes and effects analysis (FMEA) done? Many embedded/IoT systems do have these very detailed analyses (my book talks about them).
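For readers unfamiliar with FMEA, the classic arithmetic is a risk priority number (RPN) per failure mode: severity × occurrence × detection, each scored on a 1-10 scale, with the highest RPNs driving design and test attention first. A sketch follows, with failure modes and scores I have invented for the Volt scenario purely as an example.

    # FMEA sketch: rank failure modes by Risk Priority Number.
    # RPN = severity * occurrence * detection (1-10 scales; a higher
    # detection score means HARDER to detect). Scores are invented.
    failure_modes = [
        ("car left running in garage, warnings ignored", 10, 3, 6),
        ("warning chime fails to sound",                  9, 2, 5),
        ("battery-only running masks engine start",       7, 4, 7),
    ]

    ranked = sorted(
        ((sev * occ * det, name) for name, sev, occ, det in failure_modes),
        reverse=True,
    )
    for rpn, name in ranked:
        print(f"RPN {rpn:3d}: {name}")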

Now, many software developers will argue that they met requirements, and so the new feature was not a bug fix but a system improvement. I argue that the cost of the recall might have paid for a more comprehensive system-software-test FMEA. What else did they miss?

Security, Insecurity, and the Single Tester

Many of us have been writing and talking about the need for better security throughout the IT industry. I focus on mobile, embedded, and IoT, but security spans all types of software and systems. Now, with all of the news reports, industry incidents, and government involvement (ref.: Obama’s speech this last week), there is recognition and expanding action, but what should be next for security testing?
I don’t think there will be consensus. For example, some say better software development, including good testing. Others want to build walls and cyber safeguards and to mount a cyber offense, just as with traditional military and police actions. Yet more want rules, regulations, and penalties for both sides of security (the companies and the bad-guy hackers).
As in most complex things, and cyber IT certainly is complex, the answer will be yes to all of the above and many more. A key step has happened: almost all of the users and interested parties now seem to be aware of the problems. When people find out that I am involved in security testing, they ask, “should we be scared of IT security?” and I answer, “yes, and you should be more scared.”
There are actions being taken and many things to be considered. We have cyber security warriors in some places. I have written about the need to grow the number of cyber-security test warriors. And while I realize that the testing community will not agree on my list of near-term actions for developing cyber-security test warriors (experts), I think there are many possible paths toward becoming better ones. The general actions I think should be considered include:
1. Learn more about general software testing from books and classes, and for some, consider certifications (ISTQB is not supported as a good idea by everyone, but it can be an early step to gaining test KNOWLEDGE)
2. Practice tester skills (see the AST skill list – TBD web site) and become an experienced, ever-improving skilled tester (I have been practicing testing for 35 years and still have more to learn)
3. Learn more about the hacker’s world and hackers’ skills (this means we need to become “good” hackers/crackers to be able to “fight fire with fire”)
4. Understand and work with government and industry regulations and standards (yes, I know many of you don’t believe in them, but standards will be put in place and will get abused, so we should work to make standards and policies as acceptable as possible and then know how to use them correctly)
5. Know more about how to better develop software, including security and other qualities (this means we must be more than testers, e.g., be software and system people)
6. Understand risk-based testing driven by the integrity level of the software (IEEE 1012 and ISO 29119, and again, I know that some people dislike these standards, but they represent a low-level starting point from which to tailor processes, techniques, and documents; a sketch of the idea appears after this list)
7. Be better practiced in testing the non-functional elements of a software system, including quality testing, model-driven testing, math-based test techniques, and attack-based exploratory testing (these approaches are often misunderstood or poorly used “tools” of our industry, and testers should have a great many test techniques they can use beyond just checking requirements).
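As a rough illustration of item 6, here is a sketch of risk-based test allocation: each software item gets an integrity level, and the level drives how much test rigor it receives. The items, levels, and rigor table are my own inventions in the spirit of IEEE 1012 and ISO 29119, not text taken from either standard.

    # Sketch: map software integrity levels to test rigor, in the
    # spirit of risk-based testing (IEEE 1012 / ISO 29119 style).
    # Items, levels, and the rigor table are illustrative inventions.
    RIGOR = {
        4: ["requirements tests", "structural coverage", "fault injection",
            "independent review"],
        3: ["requirements tests", "structural coverage", "exploratory attacks"],
        2: ["requirements tests", "exploratory attacks"],
        1: ["smoke tests"],
    }

    software_items = {
        "engine_shutdown_controller": 4,   # safety related
        "infotainment_ui": 2,
        "trip_statistics_logger": 1,
    }

    for item, level in sorted(software_items.items(), key=lambda kv: -kv[1]):
        print(f"{item} (integrity level {level}):")
        for activity in RIGOR[level]:
            print(f"  - {activity}")
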
I know quite a few software people and testers feel that many of these ideals are “wrong” and even toxic. I hear that software and testing are arts, and that we need more creativeness. True, but software testing is much more. I hear that we need more rigor using math or models in engineering development and test. True, in part, but software is more than just science and engineering. I hear we don’t need regulations for our test industry because it is “too young,” or because regulation restricts free thinking and lets managers hide from the “hard work of testing” by claiming “dumb” conformance to meaningless documents. There is truth in these statements too, but every discipline started some place (read the history of the early books on medical anatomy), and having some regulations can force better development behaviors than the current “open season” in the wild, wild west of software security. For example, clean air regulations have helped keep the air clean in many US cities in my lifetime. We should not too quickly dismiss standards.
We will never solve all aspects of cyber security. Just as with security in everyday life, we will need the police, the military, artists, and engineers. This has not changed for thousands of years. Cyber has just given the bad players a new environment in which to commit crimes, make war, and do evil things. Most of us would not trade away the benefits that IT gives us, so we must deal with the costs that cyber brings. Security is one of those costs.