So I wrote this and got it published a month ago.
I was hoping for some responses from testers wanting to become security test warriors. I know there are a few, and it is a specialization, but no comments? Disappointing.
Are we as testers being left behind by the hacker bad guys? It seems like companies care (some more than others) and care more after they get hacked (see Target and how much money they are spending on chip card readers). So the IT/PC/Web community is being active (to some degree).
Mobile and embedded seem to be less concerned. Maybe M&S companies are just trying to get a product out, and we’ll care as we get hacked more (something about closing the barn door after the horses are gone?). I have written and presented on M&S security before, but where are my warriors (I’m too old to be a warrior)?
I wrote on this when it came out, and while this article link is dated, it had a different “slant” on things than my first posting:
I note in this article there were comments about standards and not following them, as well as references to the earlier NASA report claiming there was not a software issue.
Now I believe standards have a place, but they still need thinking. Some people use them so they don’t have to think about something like testing. Not good. Also, as far as “reports” and investigations go, they need to be subjected to thinking and the “scientific method,” in which we question any report to see if the information is incomplete. This is a fact of life for testers (just because your first 100 tests pass does not mean a bug is not there).
So I write a lot these days about skills in testing (building them), using many different approaches to testing, and watching for things like bias. Do Toyota or other companies have bias and need to improve their testing skills?
Probably, as bias is a fact of being human, and every tester I know can improve their skills.
What can we as testers do about it?
Test more and practice our skills?
Okay, I am behind on posting about embedded/mobile software security concerns, in part because the number of interesting reports has become almost a flood of “issues”. For example, this week you should read about SCADA systems (something I talk about in my attack testing book) at:
The mobile and embedded industry and their testers seem to be in the mode of “let the bad guys find the holes”. This is the classic closing of the barn door after the horses are out. It is concerning to many of us. I really don’t want my SCADA-controlled power system to be hacked and crash.
I promise I will post more of these “pointers” for embedded/mobile software security testers to have in the “horror” story play book. I just need more time to keep up with the flood.
It seems all the auto companies are issuing a larger number of recalls these days, likely at great expense and with bad PR. Maybe they figure that when you are under a waterfall, you don’t notice a little rain.
I am still left wondering if better testing would have helped. I have to believe it would since that is my business, but maybe I am missing some of the picture.
I do see companies out in my extreme environment running system tests on their cars, so maybe there is hope. I’ll be doing a posting soon with some pictures of what kinds of testing they are doing and how I know this. In the meantime, try to stay dry.
P.S. If anyone reading this is at STPcon next week in New Orleans, look me up, as I’ll be there.
Toyota seems ready for a change in PR and engineering. Great.
I have also heard Toyota hired or “added” 1,300 (or more) quality engineers to help improve. Good move (maybe). They recognize an issue exists and are trying to change.
So let’s throw bodies at it. However, do those people know quality, testing, verification, validation, engineering, or standards? I don’t know. Do they know software, systems, and bugs?
I spent years learning about bugs and testing. I wrote a book and many articles. BUT I am still learning about mobile and embedded bugs.
I hope some of the 1000s of new engineers take some time to learn, build skills, and think.
For those working in mobile apps and security (and everyone doing apps and mobile software should be worried about security), besides my book, Better Software Magazine just published an article of mine as the cover piece. See
Click to access V16I2.pdf
I am also looking to do some presentations on this subject as I continue my own personal research and learning on security. It is a never-ending quest. As I learn more, I’ll do postings on the blog here, and if anyone has other good info on mobile app security testing, I am always looking for pointers. The bad guys are moving fast, and we testers need to try to keep up, or there will be more postings and news stories like the ones I report here.
Again, ask yourself: do you want your app/software/company on the nightly news for security bugs? I THINK NOT.
Hacking continues in the mobile embedded world. See
Now one can wonder about the “reality” and importance of this hack, but it shows a continuing trend. As I collect information like this, the mobile/embedded security attacks of my book continue to seem important, even as they need to evolve to address new threats.
As cars and devices grow “smarter,” this type of situation will continue to grow, and yet most mobile/embedded testers seem to focus on verification of requirements. They are happy in their ignorance. Here is another data point to add to your “smarts”.
So the news has had some interesting public bugs. For example, GM trucks had software bugs which caused fires and a big recall/software update:
Let’s think about conceptual costs resulting from this bug:
1) It makes the news = bad PR and maybe lost sales?
2) Cost of the recall notice = money cost
3) Time in the shop and updating the software = more money cost to dealers
4) Maybe some fires and resulting lawsuits = could be a lot of money cost to the company.
I don’t know why testing did not find this bug, or how much it might have cost to find in testing with a “break it” viewpoint such as my book, “Software Test Attacks to Break Mobile and Embedded Devices,” talks about. But I would be willing to bet the cost of testing would have been less than these conceptual costs incurred when the bug is found in the field.
Teams and testers need to think of these conceptual costs when considering the ROI of testing. Places I have worked looked at the money spent on testing (millions of dollars in some cases) as cheaper than losing much more to these conceptual costs. Too many product managers only look at the cost of development (seeing test as a tax which should be minimized) and do not consider the costs of bad devices in deployment, leaving someone else to pay these “in field” costs. This is a short-sighted view of the embedded device space.
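As a back-of-the-envelope sketch, the ROI comparison above can be written out in a few lines of code. All the dollar figures here are hypothetical placeholders I made up for illustration, not real recall numbers:

```python
# Rough ROI sketch: cost of extra "attack" testing vs. the conceptual
# costs of a bug escaping to the field. All numbers are hypothetical.

testing_cost = 2_000_000  # extra pre-release security/attack testing effort

# The four conceptual cost categories from the list above
field_costs = {
    "bad_pr_and_lost_sales": 10_000_000,
    "recall_notices": 1_500_000,
    "dealer_shop_time_and_updates": 5_000_000,
    "lawsuits_from_fires": 20_000_000,
}

total_field_cost = sum(field_costs.values())

print(f"Testing cost:       ${testing_cost:,}")
print(f"Field failure cost: ${total_field_cost:,}")
print(f"Testing pays off:   {testing_cost < total_field_cost}")
```

Even if the made-up numbers are off by a large factor, the point of the exercise stands: the field-side total is the sum of several categories, any one of which can dwarf the testing budget.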
So here is a link to a nice summary about the WHOSE conference. http://www.mostly-testing.co.uk/2013/12/whose-workshop-on-self-education-in.html
It is also the end of year. So what is new and what is old?
Well, to many of us, much of the thinking on testing is not really all that new. We still have many of the same “old” problems and situations on software projects. The names (Agile, cloud, big data, etc.) may be different, and the amounts and natures of the code/software may be different, but to some of us “old timers,” many of the basic issues are rooted in human factors that have not changed in over 30 years.
So maybe next year I’ll consider writing about these human factors and what, if anything, can be done.
There are some new things. I think we are seeing a refinement of “old ideas” that is worth a tester’s time to note. New things I might list: certifications vs. skill; the concepts of verification, testing, validation, and checking; test standards released for the world, e.g., ISO/IEC/IEEE 29119 and IEEE 1012; security, security, and lack of security; and words on the nightly news that “there was not much testing done” as an issue for major new projects (left unnamed).
So I wish anyone reading a happy new year and much interest in testing.
A few weeks ago I attended a meeting where we started work on defining the skills of a context-driven tester. There should be some interesting tester skills listed next year when the skills list is made available via AST (hopefully).
But I got to wondering: given ISTQB, IEEE, and other lists of skills and knowledge in software and testing, how different or similar are they?
If they are 90% the same, should a tester focus on the common parts, or should we focus on the differences?
Also, is it the 10% that makes a person a skilled context-driven tester, or some other percentage?
I don’t know.
It might be the viewpoint that makes a school of testing, and not the 10%. I bounce in and out of various conferences and “schools” of testing. Each has a viewpoint, which at times I like (or not). I guess that makes me a context school person.
So look forward to the context-driven tester skill list next year. It will be interesting to compare, and it should advance the field of software testing.