Post-Shmoocon 2020


Over the last weekend, I was able to go to Shmoocon for the third year in a row. This year, however, I decided not to work on CTF challenges the whole time while “listening” to talks. Instead, I was actually present at the talks, and it turns out Shmoocon has some great discussions. While most people at Shmoocon try to take away a lot of technical advice and come back in full force, ready to tackle new exploits or something of that nature, this year I left more inquisitive than fired up and ready to “pwn.” In this post, I hope to explain four topics that have kept me thinking since the weekend, things that maybe we don’t think about enough in the security community or the dev community.

The user isn’t stupid

Every year at Shmoo, one of the founders of Shmoocon, Bruce Potter, gives a rant about the industry and the changes he wishes to see in it. Every year, I think I have heard Bruce talk about the same thing: users aren’t stupid, and we need to start building better tools to protect them. This got me thinking a lot about what we do in the security industry. I see it in many areas of business, but we often call those who use bad passwords dumb, or we think someone is stupid because they can’t figure out the programs we wrote. Often the user is not actually at fault; it is our fault for not educating the masses.

This really applies to any area of business. I have seen people call clients stupid for not being able to do the work the company does, even though if the clients knew how to do it, we wouldn’t have jobs. I remember a recent conversation with a friend of mine who works in user experience, and he said: “Developers and IT people are not great at designing tools. They usually work great, but internal tools are always a pain to work in.” If we are honest, it is true. While some tools we design function great for our needs, most of the staff is going to be confused by the buttons, the naming convention, or something else. We think differently than most people. That is why your CEO doesn’t have 2FA and uses the same password for Facebook as for their corporate email. They haven’t been trained to think like a hacker; they have been trained to make deals and sell an organization.

This talk made me think more about why I want to be in this industry. I think most people want to be in it for one of two reasons: they want to break stuff and not get arrested, or they want to break things and protect users. I hold firm that my purpose is to protect others. Breaking stuff is fun, and I find trying to figure out solutions to challenges exciting, but at the end of the day, I work in this industry to help others learn how to protect themselves from people like me who want to harm them.

Bruce ended by encouraging us to get better at what we are currently doing, instead of trying to learn everything, which leads to the next takeaway.

I need to stop working more and start working smarter

Bruce also noted that the security industry has a hard time selling itself because we tell everyone they must consistently work off the clock to learn. This leads to a massive amount of burnout and fatigue and makes us irritable with future generations. The security community prides itself on people who work 60 hours a week, with at least 20 of those hours off the clock on their own time. As a leader, it made me think more about learning opportunities for the employees around me. What can I do as a leader to provide more learning opportunities for the staff at my organization without forcing them to learn on their own dime?

I must admit this is a challenging endeavor: first, because I must practice what I preach, and I am the master of always working on something or toward some goal; second, because I work for a small organization and it feels like there is always too much work to do. Four years ago, when I started in IT, I learned a lot by doing. I was pushed to learn new things because if I didn’t learn them, the work would not get done. Luckily, my bosses gave me the time I needed. Maybe that is where it starts: giving others small projects they can learn on, projects that push them to do better. Training sessions and mentoring from leadership will also help mold employees. The security community is small, and sometimes it seems difficult to learn. Leaders need to start mentoring those under us (and I am still very new to the field; I need mentoring myself) and give them the chance to learn when they are on the clock, not off it.

What was a great feature could turn into a bug

During the opening day, I got to see a talk by Jonathan Leitschuh, who is best known for finding the Zoom zero-day in 2019. What I found most interesting about Jonathan’s talk was his interaction with Zoom’s developers. They insisted that the vulnerabilities Jonathan found were features that made Zoom attractive to their clients. While the local web server was a feature that made the product easier and more efficient to use, it also created a backdoor that put customers at risk.
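To make that pattern concrete, here is a deliberately simplified, hypothetical sketch in Python of the kind of local “helper” web server Jonathan described. This is not Zoom’s actual code; the port number and the client launch are made up. The point is the design: a server listening on localhost joins meetings with no origin check and no user prompt, so any web page the user visits can trigger the same action.

```python
# Hypothetical sketch of a convenience feature that doubles as a backdoor:
# a local web server that "joins meetings" with no origin check or prompt.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs


class JoinHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urlparse(self.path)
        meeting = parse_qs(parsed.query).get("meeting", [""])[0]
        if parsed.path == "/join" and meeting:
            # The "feature": one click joins the meeting instantly.
            # The bug: any page can request
            # http://localhost:19999/join?meeting=... and the user is
            # never asked for confirmation.
            print(f"[helper] would launch the client and join meeting {meeting}")
            self.send_response(200)
        else:
            self.send_response(404)
        self.end_headers()


if __name__ == "__main__":
    # Listens only on localhost, but the browser happily relays requests to it.
    HTTPServer(("127.0.0.1", 19999), JoinHandler).serve_forever()
```

The convenience is real, but so is the hole; requiring an explicit confirmation or a token/origin check is exactly the kind of trade-off the next paragraph is about.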

This could be true of just about any application. Most developers did not learn security and, honestly, don’t think like hackers most of the time. Their job, in most cases, is to get a product out the door as fast as possible that meets the needs of the organization, and many of them are being pushed to ship even faster. I have been pushed to get products out the door very quickly in my own organization, and it has prevented me from doing the job the way I would have liked. I’m sure most penetration testers could tell you that they wish they had longer to poke around for holes in a network as well. The point is, in the security community, as in every industry, we need to adapt and understand that many of the conveniences we ship as “features” can open holes we weren’t expecting. As a security person, I need to understand that not everyone will think the way I think, and I need to treat that as a learning experience rather than a whipping stick. As a developer, I need to understand the scope of the program I am writing and think about the holes that a convenient feature could open.

Security isn’t just CVEs; it’s also misconfigurations

Lastly, I enjoyed a talk by Mark Manning on Kubernetes security for pentesters. It was an excellent talk primarily because Mark noted that it was not about zero-days in Kubernetes but about misconfigurations. This is something I struggle with myself; I think we often focus on zero-day attacks or CVEs, but we don’t focus enough on proper configuration. Every conference seems to be focused on zero-days or new vulnerabilities. I rarely see a talk where a person shows how they pwned a system because it was not correctly configured, yet that seems to be the nature of a large number of attacks. The Capital One data breach did not happen because of a vulnerability that was disclosed and left unpatched, or because some new zero-day was dropped; it happened because a misconfiguration left Capital One data exposed to the internet. It has shifted my train of thought: there are likely many times more systems vulnerable to attack over the internet because of misconfigurations than because of a zero-day.
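Since misconfigurations are the whole point here, a small example helps. The sketch below is my own illustration (not from Mark’s talk) using the official kubernetes Python client to flag two common misconfigurations: containers running privileged and pods sharing the host’s network or PID namespace. It assumes you have a local kubeconfig with permission to list pods.

```python
# Minimal audit sketch: flag a couple of common Kubernetes misconfigurations.
# Requires the official client: pip install kubernetes
from kubernetes import client, config


def audit_pods():
    config.load_kube_config()  # uses your local kubeconfig
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        name = f"{pod.metadata.namespace}/{pod.metadata.name}"
        spec = pod.spec
        if spec.host_network or spec.host_pid:
            print(f"{name}: shares the host network or PID namespace")
        for c in spec.containers:
            sc = c.security_context
            if sc and sc.privileged:
                print(f"{name}: container '{c.name}' runs privileged")


if __name__ == "__main__":
    audit_pods()
```

None of these findings would ever show up in a CVE feed, which is exactly the kind of gap a scanner focused only on known vulnerabilities will miss.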

Changing Threats in Healthcare

October is Cybersecurity Awareness Month. All month long, many organizations have made it a priority to spread awareness among their members and to share information with the public on how to protect themselves.

Over the last decade, I am sure you have heard about your healthcare data and the importance of HIPAA to healthcare organizations. Healthcare data is essential to protect, and every person reading this article plays an integral part in securing it. As a consumer of healthcare products and apps, you need to focus on protecting your data. Working in a healthcare company also exposes us to a unique set of threats.

Over the last four years, the threats have changed significantly. According to Steve Mansfield-Devine (2017), the threat landscape has shifted from stealing and selling patient data to attacking healthcare organizations with ransomware. Ransomware is a type of malware that takes over a computer’s files and holds them hostage until the owner of the data pays a ransom. Most of the time, the attack encrypts the files on the computer, and the attacker will not release the decryption key until the ransom is paid. You may have heard of this in the news recently. Personally Identifiable Information, or PII, is not worth as much as it used to be because of the massive number of data breaches that have occurred over the last few years. Stolen health data was worth $75 to $100 per patient record in 2015; that price has since dipped to $25 to $50 per record on the dark web (Mansfield-Devine, 2017). Ransomware attacks, by contrast, bring in far more on average. Healthcare organizations cannot afford downtime, for patient safety and financial reasons, which means they are twice as likely to pay the ransom. The CryptoLocker attack that happened in 2013 made the attackers $30,000,000 in three months (Slayton, 2018).

Attackers are financially motivated to attack healthcare organizations, which cannot afford downtime. Many small healthcare organizations could not afford to pay the ransoms that attackers are requesting. Even if they could, HIPAA regulators could fine the company for each patient record that was compromised, which could bankrupt an organization. For most, this should be incentive enough to ensure the safety of their files. Some ransomware attacks are linked to nation-states such as North Korea, or to criminal and terrorist organizations.

Many of the ransomware attacks have stemmed from phishing and spamming campaigns (Mansfield-Devine, 2017). Weaknesses in protocols such as SMB have let ransomware spread and infect entire networks. It is crucial to understand how these infections spread and to be wary about clicking on links. Many of the attacks come from what is called spear phishing, in which the attacker crafts an email that appears to come from a client or a member of the organization and asks the user to open a malicious PDF or click a link that downloads a malicious file onto their computer. That malicious file allows the attacker to control the victim’s computer. As healthcare professionals, we are all responsible for protecting the integrity of our clients’ data as well as our own. Do not click on links sent to your email unless you can confirm the validity of the message. If you are unsure, do not hesitate to ask; no one will be angry with you for ensuring the security of the organization. It is everyone’s responsibility to protect company assets as well as their own data.

Works Cited

Mansfield-Devine, S. (2017). Leaks and ransoms: The key threats to healthcare organisations. Network Security, June 2017.

Slayton, T. (2018). Ransomware: The virus attacking the healthcare industry. Journal of Legal Medicine. doi:10.1080/01947648.2018.1473186

The Art of Deceptive Magic

Recently I was listening to a podcast, Hacking Humans, by the team at the Cyberwire. Joe Carrigan was speaking about an article he had recently read about a study of eye movements during illusions. The illusion in question is one where the magician throws a ball in the air several times; on the last throw, the magician secretly keeps the ball in his hand yet pretends it is still being thrown. The person watching does not realize this, and the ball seems to magically disappear. When people are asked what happened to the ball, or when it disappeared, they swear the ball was still in motion even after it had been laid in the magician’s lap.

Why does the audience still swear they see the ball even though it has disappeared? The article explains that the audience members are watching the eyes of the magician, not the ball, so when the magician’s eyes continue to track the ball’s supposed movement, the audience is convinced the ball is still in motion. However, once the audience members know the trick, they are no longer fooled.

At this point in this blog post, you, the reader, are probably wondering what this has to do with cybersecurity. Social engineering is a practice in information security in which a malicious actor uses psychology to manipulate and deceive users into revealing sensitive information; these attacks are also commonly used to distribute malware. A social engineering attack can be carried out in multiple ways and, in many cases, has multiple layers. If you read Kevin Mitnick’s book “The Art of Deception,” you will hear of many attacks that stacked layer upon layer of social engineering: calling a company while pretending to be from another branch to get a company directory, then working up from there to get in touch with the CEO, then sending out an email with a malicious link, and so forth. The point is that social engineering attacks can play out in many ways, but they all use deception.

The CIA had a Cold War-era guide to trickery and deception that taught agents how to deceive others in the field. One of the ways it taught to poison a drink was to light the other person’s cigarette and drop the poison into their drink while they watched the flame; this works because the person will not take their eyes off the fire. The social engineer does the same thing: they deceive you by fixing your attention, and your fear, on what is right in front of you while exploiting another hole. The main purpose of a social engineering attack is to take your eyes off one thing while a weak spot is exploited.

Social engineering awareness training is very important, as the magician example shows: once the audience members understand the trick, they are never tricked again. A good trainer knows the many different ways social engineering attacks are carried out and helps prevent them by explaining the techniques and testing employees. The training must be done in a way that helps employees understand its importance and makes it something they want to do. Once they learn how to spot the tricks, the company becomes safer.