Developers: 4 Strategies To Maintain Application Security In 2019

In 2019, software developers face a unique set of security challenges. When a company is bound by regulations like HIPAA, for example, all software that company uses must comply with its data-storage rules. A software developer can be held liable for a data breach simply for managing a database that contains patient data.

Stopping internal threats isn’t your job, but securing your applications as thoroughly as possible is. The following strategies will help:

  1. Focus on security first

Security is tough, but it’s expected, and in some cases legally required. Don’t abandon security features just to get your product shipped quickly. If you can’t build certain aspects of security into your product, let your end users know what they need to do to make up for it on their end.

  2. Educate your clients

No programming language is perfectly secure, and developers can only do so much. Your end users need to know how to manage security risks on their end. Educate your clients on the types of attacks most common to their particular software stack, and help them draw up at least a basic security protocol. For example, if your client insists on using ColdFusion applications, they need to understand the vulnerabilities associated with SQL injection.
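
The core of the SQL injection problem, and its fix, is the same in every stack: never splice user input into query text; bind it as a parameter instead. In ColdFusion that means wrapping query variables in cfqueryparam; here is the same idea sketched in Java with JDBC, assuming a hypothetical patients table and a Connection opened elsewhere:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PatientLookup {

    // UNSAFE: concatenating user input into SQL lets an attacker rewrite the query.
    // For example, lastName = "x' OR '1'='1" returns every row in the table.
    static ResultSet findUnsafe(Connection db, String lastName) throws SQLException {
        String sql = "SELECT id, last_name FROM patients WHERE last_name = '" + lastName + "'";
        return db.createStatement().executeQuery(sql);
    }

    // SAFE: a parameterized query sends the input as data, never as SQL text.
    static ResultSet findSafe(Connection db, String lastName) throws SQLException {
        PreparedStatement stmt =
            db.prepareStatement("SELECT id, last_name FROM patients WHERE last_name = ?");
        stmt.setString(1, lastName); // the driver handles quoting and escaping
        return stmt.executeQuery();
    }
}
```

Whatever the language, if a client's codebase builds queries by string concatenation, that is the first thing their security protocol should flag.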

You could leave it to the client to come up with their own security plan, but that’s not fair. The client should at least be made aware of the risks. If they don’t want your help after that, it’s their choice.

Also, don’t force a company to compromise its security just to install your application across its network. Building your install packages with InstallAware lets employees install applications without being granted blanket administrative permissions: as long as the system admin enables the Windows Installer “Always install with elevated privileges” policy, installations that require admin rights will still succeed.
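
For reference, that policy only takes effect when the AlwaysInstallElevated value is set to 1 under both the machine and the user policy hives. A minimal sketch of verifying it, shelling out to reg.exe (Windows only), might look like this:

```java
import java.io.IOException;

public class ElevatedInstallCheck {

    // "Always install with elevated privileges" requires AlwaysInstallElevated=1
    // under BOTH of these keys; setting only one of them has no effect.
    private static final String HKLM_KEY =
        "HKLM\\SOFTWARE\\Policies\\Microsoft\\Windows\\Installer";
    private static final String HKCU_KEY =
        "HKCU\\SOFTWARE\\Policies\\Microsoft\\Windows\\Installer";

    // Runs reg.exe and scans its output for the enabled (0x1) value.
    static boolean policyEnabled(String key) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("reg", "query", key, "/v", "AlwaysInstallElevated")
                .redirectErrorStream(true)
                .start();
        String out = new String(p.getInputStream().readAllBytes());
        p.waitFor();
        return out.contains("0x1");
    }

    public static void main(String[] args) throws Exception {
        boolean enabled = policyEnabled(HKLM_KEY) && policyEnabled(HKCU_KEY);
        System.out.println("Always install with elevated privileges: "
                + (enabled ? "ON" : "OFF"));
    }
}
```

Be aware that Microsoft’s own documentation flags this policy as a potential privilege-escalation path, so it’s something the system admin should enable deliberately, not something your installer should depend on silently.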

  3. When asked to develop for the medical field, tread carefully

In a perfect world, you could develop a billing system for a medical company, hand it off, and call it a day. In the real world, being the developer means you’re going to be called every time there’s a problem. You’ll probably need to install updates and patches yourself, which means you’ll be handling sensitive information and can be held liable in the event of a data breach.

Before developing software for the medical field, familiarize yourself with the HIPAA requirements for software, as summarized by DevOps.com:

  • User authorization
  • Controlled access
  • Authorization monitoring
  • Data backup
  • Remediation plan
  • Emergency mode
  • Automatic log off
  • Data encryption/decryption
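
To make the last item concrete, here is a minimal sketch of encrypting and decrypting a record with AES-256-GCM via the standard Java JCE. The key-generation step is for demonstration only; under HIPAA, key management (a KMS or HSM, rotation, access control) is the hard part and is out of scope here:

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class PhiEncryption {

    private static final int GCM_TAG_BITS = 128;
    private static final int IV_BYTES = 12;

    // Encrypts a record with AES-256-GCM and prepends the IV to the ciphertext
    // so decryption can recover it. A fresh random IV is required per message.
    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }

    static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key,
                new GCMParameterSpec(GCM_TAG_BITS, blob, 0, IV_BYTES));
        return cipher.doFinal(blob, IV_BYTES, blob.length - IV_BYTES);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey(); // demo only: store real keys in a KMS/HSM
        byte[] blob = encrypt(key, "patient: Jane Doe, DOB 1970-01-01".getBytes());
        System.out.println(new String(decrypt(key, blob)));
    }
}
```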

HIPAA fines are assessed on four tiers and range from $100 to $50,000 per violation (per record), so a breach exposing 1,000 records could mean $100,000 in fines even at the lowest tier. In 2016, organizations were hit with more than $23.5 million in fines; in 2018, fines reached more than $28.5 million.

If you’re unfamiliar with meeting HIPAA requirements, it’s best to pass on the project and let someone else handle it.

  4. Stop using insecure programming languages

UpGuard reported on a WhiteHat Security study that found the three most popular programming languages also have the most vulnerabilities:

  • .NET (31%)
  • Java (28%)
  • ASP (15%)

The largest source of vulnerabilities is cross-site scripting (XSS), in which attackers inject client-side scripts into webpages to bypass access controls. For .NET applications, the biggest vulnerability is the accidental exposure of sensitive data such as comments and error messages.
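
The standard defense against reflected XSS is to encode user input before rendering it into HTML. As an illustrative sketch, here is a minimal escaper in Java; in a real project you would reach for a maintained library such as the OWASP Java Encoder rather than rolling your own:

```java
public class HtmlEscape {

    // Encodes the five characters that let user input break out of an HTML text context.
    static String escapeHtml(String input) {
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String payload = "<script>alert(document.cookie)</script>";
        // Rendered as inert text instead of executing in the victim's browser:
        System.out.println(escapeHtml(payload));
    }
}
```

Escaping must match the output context (HTML body, attribute, JavaScript, URL); encoding for the wrong context is a common way these defenses fail.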

The same study found that ColdFusion had the highest number of SQL injection vulnerabilities.

Casually programming in insecure languages may have been fun in the ’90s, but it’s not going to work in 2019.

Don’t count on consumers to protect themselves

According to Pew Research Center studies, many Americans worry about the security of personal data collected by third parties, but fail to follow digital security best practices in their personal lives. Until consumers start protecting themselves, developers need to go to extra lengths to build security into their products.

Google’s Gmail, for example, provides built-in encryption, although it’s not extensive and doesn’t meet most regulatory requirements. Years ago, Google announced its E2EMail project – a Chrome browser extension that would provide end-to-end encryption for Gmail users. It was a great idea that turned out to be too difficult to ship, and it now survives as an open-source project on GitHub.

Users can install third-party software that provides end-to-end encryption, but they’re not likely to do so. No matter how many data breaches make the news, consumers express outrage but rarely take steps to protect themselves. Most believe it isn’t their job.

Yes, consumers should be more diligent and avoid signing into email or bank accounts on public Wi-Fi, but they’re going to do it anyway. You can’t force people to follow security best practices in their personal lives, but you can reduce the number of potential exploits by making your software as secure as possible.
