API Security News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API security conversation, and that I wanted to include in my research. I'm using all of these links to better understand how companies across the space are securing (or not securing) their API infrastructure, and addressing one of the biggest problems we face online today.

More Investment In API Security

I'm getting some investment from ElasticBeam to turn up the volume on my API security research, so I will be telling more stories on the subject, and publishing an industry guide, as well as a white paper, in the coming weeks. I want API security to become a first-class area of my API research, alongside definitions, design, deployment, management, monitoring, testing, and performance.

Much of my API security research is built on top of OWASP's hard work, but honestly I haven't gotten very far along in it. I've managed to curate a handful of companies I've come across in my research, but haven't had time to dive in deeper, or fully process all the news I've curated there. It takes time to stay in tune with what companies are up to, and I'm thankful for ElasticBeam's investment to help me pay the bills while I'm heads down doing this work.

I am hoping that my API security research will also help encourage you to invest more into API security. As I do with my other partners, I will find ways of weaving ElasticBeam into the conversation, but my stories, guides, and white papers will be about the wider space–which Elastic Beam fits in. I'm hoping they'll complement Runscope as my partner when it comes to monitoring, testing, and performance (see how I did that, I worked Runscope in too), adding the security dimension to these critical layers of operating a reliable API.

One thing that attracted me to conversations with ElasticBeam was that they were developing a solution that could augment existing API management solutions like 3Scale and Amazon Web Services. I’ll have a talk with the team about integrating with Tyk, DreamFactory, and Restlet–my other partners. Damn I’m good. I got them all in here! Seriously though, I’m thankful for these partners investing in what I do, and helping me tell more stories on the blog, and produce more guides and papers.

I feel like 3Scale has long represented what I've been doing for over seven years–a focus on API management. Restlet, DreamFactory, and Tyk represent the maturing and evolution of this layer, while Runscope reflects the awareness that has been generated at the API management layer, evolving to serve not just API providers, but also API consumers. I feel like ElasticBeam reflects the next critical piece of the puzzle, moving the API security conversation beyond the authentication and rate limiting of API management, or limiting the known threats, and making it about identifying the unknown threats our API infrastructure faces today.

Opportunity To Develop A Threat Intelligence Aggregation API

I came across this valuable list of threat intelligence resources and think that the section on information sources should be aggregated and provided as a single threat intelligence API. When I come across valuable information repos like this my first impulse is to go through them, standardize the data, and upload it as JSON and YAML to Github, making all of it forkable, and available via an API.

Of course, if I responded to every impulse like this I would never get any of my normal work done, or actually pay my bills. A second option for me is to put things out there publicly in hopes that a) someone will pay me to do the work, or b) someone else who has more time, and the rent paid, will tackle the work. With this in mind, this list of sources should be standardized, and published to Github and as an API:


Ideally, each source on this list would be publishing a forkable version of their data on Github and/or deploying a simple web API, but alas that isn't the world we live in. Part of the process to standardize and normalize the threat intelligence from all of these sources would be to reach out to each provider, and take their temperature regarding working together to improve each data source by itself, as well as part of an aggregated set of data and API sources.

Similar to what I'm trying to do across many of the top business sectors being impacted by APIs, we need to work on aggregating all the existing sources of threat intelligence, and begin identifying a common schema that any new player could adopt. We need an open data schema, an API definition, as well as a suite of open source server and client tooling to emerge, if we are going to stay ahead of the cybersecurity storm that has engulfed us, and will continue to surround us until we work together to push it back.

Alexa Top 1 Million Sites Probable Whitelist of the top 1 million sites from Amazon (Alexa).
APT Groups and Operations A spreadsheet containing information and intelligence about APT groups, operations and tactics.
AutoShun A public service offering at most 2000 malicious IPs and some more resources.
BGP Ranking Ranking of ASNs having the most malicious content.
Botnet Tracker Tracks several active botnets.
BruteForceBlocker BruteForceBlocker is a Perl script that monitors a server's sshd logs and identifies brute force attacks, which it then uses to automatically configure firewall blocking rules and submit those IPs back to the project site.
C&C Tracker A feed of known, active and non-sinkholed C&C IP addresses, from Bambenek Consulting.
CI Army List A subset of the commercial CINS Score list, focused on poorly rated IPs that are not currently present on other threatlists.
Cisco Umbrella Probable Whitelist of the top 1 million sites resolved by Cisco Umbrella (was OpenDNS).
Critical Stack Intel The free threat intelligence parsed and aggregated by Critical Stack is ready for use in any Bro production system. You can specify which feeds you trust and want to ingest.
C1fApp C1fApp is a threat feed aggregation application, providing a single feed, both Open Source and private. It provides a statistics dashboard and an open API for search, and has been running for a few years now. Searches are on historical data.
Cymon Cymon is an aggregator of indicators from multiple sources with history, so you have a single interface to multiple threat feeds. It also provides an API to search a database along with a pretty web interface.
Deepviz Threat Intel Deepviz offers a sandbox for analyzing malware and has an API available with threat intelligence harvested from the sandbox.
Emerging Threats Firewall Rules A collection of rules for several types of firewalls, including iptables, PF and PIX.
Emerging Threats IDS Rules A collection of Snort and Suricata rules files that can be used for alerting or blocking.
ExoneraTor The ExoneraTor service maintains a database of IP addresses that have been part of the Tor network. It answers the question whether there was a Tor relay running on a given IP address on a given date.
Exploitalert Listing of latest exploits released.
Feodo Tracker The Feodo Tracker tracks the Feodo trojan.
FireHOL IP Lists 400+ publicly available IP Feeds analysed to document their evolution, geo-map, age of IPs, retention policy, overlaps. The site focuses on cyber crime (attacks, abuse, malware).
FraudGuard FraudGuard is a service designed to provide an easy way to validate usage by continuously collecting and analyzing real-time internet traffic.
Hail a TAXII Hail a TAXII is a repository of Open Source Cyber Threat Intelligence feeds in STIX format. They offer several feeds, including some that are listed here already in a different format, like the Emerging Threats rules and PhishTank feeds.
I-Blocklist I-Blocklist maintains several types of lists containing IP addresses belonging to various categories. Some of these main categories include countries, ISPs and organizations. Other lists include web attacks, TOR, spyware and proxies. Many are free to use, and available in various formats.
Majestic Million Probable Whitelist of the top 1 million web sites, as ranked by Majestic. Sites are ordered by the number of referring subnets. More about the ranking can be found on their blog.
MalShare The MalShare Project is a public malware repository that provides researchers free access to samples.
DNS-BH The DNS-BH project creates and maintains a listing of domains that are known to be used to propagate malware and spyware. These can be used for detection as well as prevention (sinkholing DNS requests).
Metadefender Cloud Threat Intelligence Feeds Metadefender Cloud Threat Intelligence Feeds contains top new malware hash signatures, including MD5, SHA1, and SHA256. These new malicious hashes have been spotted by Metadefender Cloud within the last 24 hours. The feeds are updated daily with newly detected and reported malware to provide actionable and timely threat intelligence.
NormShield Services NormShield Services provide thousands of domain information (including whois information) that potential phishing attacks may come from. Breach and blacklist services are also available. There is free sign up for public services for continuous monitoring.
A feed of IP addresses found to be attempting brute-force logins on services such as SSH, FTP, IMAP, phpMyAdmin and other web applications.
OpenPhish Feeds OpenPhish receives URLs from multiple streams and analyzes them using its proprietary phishing detection algorithms. There are free and commercial offerings available.
PhishTank PhishTank delivers a list of suspected phishing URLs. Their data comes from human reports, but they also ingest external feeds where possible. It's a free service, but registering for an API key is sometimes necessary.
Ransomware Tracker The Ransomware Tracker tracks and monitors the status of domain names, IP addresses and URLs that are associated with ransomware, such as botnet C&C servers, distribution sites and payment sites.
SANS ICS Suspicious Domains The Suspicious Domains Threat Lists by SANS ICS tracks suspicious domains. It offers 3 lists categorized as high, medium or low sensitivity, where the high sensitivity list has fewer false positives and the low sensitivity list has more false positives. There is also an approved whitelist of domains. Finally, there is a suggested IP blocklist from DShield.
signature-base A database of signatures used in other tools by Neo23x0.
The Spamhaus project The Spamhaus Project contains multiple threatlists associated with spam and malware activity.
SSL Blacklist SSL Blacklist (SSLBL) is a project that provides a list of "bad" SSL certificates identified as being associated with malware or botnet activities. SSLBL relies on SHA1 fingerprints of malicious SSL certificates and offers various blacklists.
Statvoo Top 1 Million Sites Probable Whitelist of the top 1 million web sites, as ranked by Statvoo.
Strongarm, by Percipient Networks Strongarm is a DNS blackhole that takes action on indicators of compromise by blocking malware command and control. Strongarm aggregates free indicator feeds, integrates with commercial feeds, utilizes Percipient's IOC feeds, and operates DNS resolvers and APIs for you to use to protect your network and business. Strongarm is free for personal use.
Talos Aspis Project Aspis is a closed collaboration between Talos and hosting providers to identify and deter major threat actors. Talos shares its expertise, resources, and capabilities including network and system forensics, reverse engineering, and threat intelligence at no cost to the provider.
Threatglass An online tool for sharing, browsing and analyzing web-based malware. Threatglass allows users to graphically browse website infections by viewing screenshots of the stages of infection, as well as by analyzing network characteristics such as host relationships and packet captures.
ThreatMiner ThreatMiner has been created to free analysts from data collection and to provide them a portal on which they can carry out their tasks, from reading reports to pivoting and data enrichment. The emphasis of ThreatMiner isn't just about indicators of compromise (IoC) but also to provide analysts with contextual information related to the IoC they are looking at.
VirusShare is a repository of malware samples that provides security researchers, incident responders, forensic analysts, and the morbidly curious with access to samples of malicious code. Access to the site is granted via invitation only.
Yara-Rules An open source repository with different Yara signatures that are compiled, classified and kept as up to date as possible.
ZeuS Tracker The ZeuS Tracker tracks ZeuS Command & Control servers (hosts) around the world and provides you with a domain blocklist and an IP blocklist.
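
Getting from a list like this to something forkable could start with a trivial normalization pass. This is a minimal sketch in Python; the field names in the common schema are my own assumptions, not an established standard:

```python
import json

# Hypothetical minimal common schema for a threat intelligence source entry.
# These field names are illustrative, not any agreed-upon specification.
COMMON_FIELDS = ["name", "description", "url", "format", "license"]

def normalize_source(raw):
    """Map a raw source listing onto the common schema, defaulting missing fields."""
    return {field: raw.get(field, "") for field in COMMON_FIELDS}

# Raw entries as they might be scraped from a list like the one above.
sources = [
    {"name": "PhishTank", "description": "Suspected phishing URLs.", "format": "csv"},
    {"name": "AutoShun", "description": "Malicious IPs and related resources."},
]

normalized = [normalize_source(s) for s in sources]
print(json.dumps(normalized, indent=2))
```

A file like this, checked into Github as JSON (or dumped to YAML), is what would make the aggregate dataset forkable, and give an aggregation API a consistent shape to serve.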

Does Your API Sandbox Have Malicious Users?

I have been going through my API virtualization research, expanding the number of companies I'm paying attention to, and taking a look at industry specific sandboxes, mock APIs, and other approaches to virtualizing APIs, and the data and content they serve up. I'm playing around with some banking API sandboxes, getting familiar with PSD2, and learning about how banks are approaching their API virtualization–providing me with examples from within a heavily regulated industry.

As I'm looking through Open Bank Project's PSD2 Sandbox, and playing with services that are targeting the banking industry with sandbox solutions, I find myself thinking about Netflix's Chaos Monkey, which is "a resiliency tool that helps applications tolerate random instance failures." Now I am wondering if there are any API sandboxes out there that have simulated threats built in, pushing developers to build more stable and secure applications with API resources.

There are threat detection solutions that have APIs, and some interesting sandboxes for analyzing malware that have APIs, but I haven't found any API sandboxes that just have general threats available in them. If you know of any sandboxes that provide threat simulations, or sample data, please let me know. Also, if you know of any APIs that specifically provide API security threats in their sandbox environments so that developers can harden their apps–I'd love to hear more about it. I depend on my readers to let me know about interesting details from API operations like this.
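
To make the idea concrete, here is a minimal sketch of what a Chaos Monkey style layer in an API sandbox might look like. The threat responses, the rate, and the sample payload are entirely hypothetical:

```python
import random

# A sketch of a "chaos" layer for an API sandbox: a percentage of responses
# are replaced with simulated threat conditions, so client developers have to
# harden their error handling. The threat list below is illustrative only.
SIMULATED_THREATS = [
    {"status": 429, "body": {"error": "rate limit exceeded"}},
    {"status": 500, "body": {"error": "upstream failure"}},
    {"status": 200, "body": {"name": "<script>alert(1)</script>"}},  # injected payload
]

def sandbox_response(real_response, threat_rate=0.1, rng=random):
    """Return the real sandbox response, or a simulated threat threat_rate of the time."""
    if rng.random() < threat_rate:
        return rng.choice(SIMULATED_THREATS)
    return real_response

ok = {"status": 200, "body": {"name": "Checking Account"}}
print(sandbox_response(ok, threat_rate=0.0))  # at rate 0, always the real response
```

Even a layer this simple forces applications built against the sandbox to handle rate limiting, failures, and hostile content before they ever touch production.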

I'm on the hunt for APIs that have sandboxes that help application developers think about the resiliency and security of their applications built on top of an API. Eventually I'd also love to see a sandbox emerge that could help API providers think about the resiliency and security of their APIs. I'm feeling like this aspect of API virtualization is going to become just as critical as guidance on API design best practices, helping API operators better understand the threats they face as API operators, and quantify them against their API in a virtualized and simulated environment that isn't going to freak them out about the availability of their production environment.

I’ll keep an eye out for more examples of this in the wild–if you know of anything please let me know–thanks!

Making An Account Activity API The Default

I was reading an informative post about the Twitter Account Activity API, which seems like something that should be the default for ALL platforms. In today’s cyber insecure environment, we should have the option to subscribe to a handful of events regarding our account or be able to sign up for a service that can subscribe and help us make sense of our account activity.

An account activity API should be the default for ALL the platforms we depend on. There should be a wealth of certified aggregate activity services that can help us audit and understand what is going on with our platform account activity. We should be able to look at, understand, and react to the good and bad activity via our accounts. If there are applications doing things that don’t make sense, we should be able to suspend access, until more is understood.

The Twitter Account Activity API callback request contains three levels of detail:

  • direct_message_events: An array of Direct Message Event objects.
  • users: An object containing hydrated user objects keyed by user ID.
  • apps: An object containing hydrated application objects keyed by app ID.

The Twitter Account Activity API provides a nice blueprint other API providers can follow when thinking about their own solution. While the schema returned will vary between providers, it seems like the API definition, and the webhook driven process can be standardized and shared across providers.
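
As a sketch of how a consumer might process such a callback, here is a minimal handler built around those three documented keys. The handler name and sample payload are my own illustration, not Twitter's code:

```python
import json

# A sketch of a webhook receiver for an Account Activity style callback.
# It summarizes the three top-level keys documented above.
def handle_activity_callback(payload):
    """Summarize an account activity callback into (event_count, user_ids, app_ids)."""
    events = payload.get("direct_message_events", [])
    users = payload.get("users", {})
    apps = payload.get("apps", {})
    return len(events), sorted(users), sorted(apps)

# A hypothetical callback body, shaped like the documented structure.
callback = json.loads("""{
  "direct_message_events": [{"type": "message_create", "id": "1"}],
  "users": {"1234": {"name": "example"}},
  "apps": {"5678": {"app_name": "example-app"}}
}""")
print(handle_activity_callback(callback))  # → (1, ['1234'], ['5678'])
```

Because the top-level structure is so simple, this is the part that could plausibly be standardized across providers, even while the hydrated objects inside vary.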

The Twitter Account Activity API is in beta, but I will keep an eye on it. Now that I have the concept in my head, I’ll also look for this type of API available on other platforms. It is one of those ideas I think will be sticky, and if I can kick up enough dust, maybe other API providers will consider. I would love to have this level of control over my accounts, and it is also good to see Twitter still rolling out new APIs like this.

Validating My API Schema As Part of My API Security Practices

I am spending more time thinking about the unknown unknowns when it comes to API security. This means thinking beyond the usual suspects like encryption, API keys, and OAuth. As I monitor the API space I'm keeping an eye out for examples of security concerns that not every API provider is thinking about. I found one recently in Ars Technica, about the Federal Communications Commission (FCC) leaking email addresses through the FCC API for anyone who submitted feedback as part of issues like the recent Net Neutrality discussion.

It sounds like the breach with the FCC API was unintentional, but it provides a pretty interesting example of a security risk that could probably be mitigated with some basic API testing and monitoring, using common services like Runscope, or the Restlet Client. Adding a testing and monitoring layer to your API operations helps you look beyond just an API being up or down. You should be validating that each endpoint is returning the intended/expected schema. Just this little step of setting up a more detailed monitor can give you that brief moment to think a little more deeply about your schema–little things like whether or not you should be sharing the email addresses of thousands, or even millions, of users.

I'm working on a JSON Schema for my Open Referral Human Services API right now. I want to be able to easily validate any API as human services compliant, but I also want to be able to set up testing and monitoring, as well as security checkups, by validating the schema. When it comes to human services data I want to be able to validate every field present, ensuring only what is required gets out via the API. I am validating primarily to ensure an API and the resulting schema are compliant with HSDS/A standards, but seeing this breach at the FCC has reminded me that taking the time to validate the schema for our APIs can also contribute to API security–for those attacks that don't come from outside, but from within.
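
A schema validation check like this doesn't need to be elaborate to catch a leak like the FCC's. Here is a minimal sketch of the idea, comparing response fields against a whitelist so an accidentally exposed field surfaces in monitoring. The field names are illustrative, not the actual FCC or HSDS schema:

```python
# A sketch of schema validation as a security check: flag any field present
# in an API response that is not explicitly allowed by the schema, so an
# accidentally exposed column (like an email address) shows up in monitoring.
ALLOWED_FIELDS = {"id", "name", "comment_text", "submitted_on"}

def find_unexpected_fields(records, allowed=ALLOWED_FIELDS):
    """Return the set of fields present in the response but not in the schema."""
    unexpected = set()
    for record in records:
        unexpected |= set(record) - allowed
    return unexpected

# A hypothetical response that leaks a field the schema never intended to expose.
response = [
    {"id": 1, "name": "Jane", "comment_text": "...", "email": "jane@example.com"},
]
print(find_unexpected_fields(response))  # → {'email'}
```

Wiring a check like this into an existing monitor turns "is the API up?" into "is the API returning only what it should?", which is the difference that would have flagged the FCC leak.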

Disclosure: Restlet Client and Runscope are API Evangelist partners.

I Am Working With Elastic Beam To Help Define API Security

Security is the number one concern companies, organizations, institutions, and government agencies have when I'm talking with them about doing APIs. Strangely, it is also one of the most deficient and underinvested areas of API operations. Companies are just learning to design, deploy, and manage their APIs, and monitoring, testing, and security are still on the future road map for many API providers I know.

Security is one of the important areas I've been trying to find more time and resources to invest in as part of my research, and I've been on the hunt for interesting providers to partner with when it comes to defining security as it applies to APIs. There are a number of web and infrastructure security companies out there, but there aren't enough that are focused on just APIs. With the number of APIs increasing, we need more eyeballs on the problem, and even more services and tools to stay ahead of the curve.

I finally found a worthwhile partner to help explore API security as part of my regular work as the API Evangelist, a new API security focused startup called Elastic Beam. They are a brand new effort exclusively focused on API security, hitting all the buzzworthy areas of the tech space (artificial intelligence, machine learning, etc.), while also doing one thing and doing it well–API security.

I’ve seen Elastic Beam in action, and have a copy to play with as I’m exploring a variety of scenarios in alignment with my API security research. Elastic Beam is going to invest in API Evangelist so I can pay attention to API security more, producing stories, guides, and white papers, and I’m going to help translate what it is they are offering, and help keep them focused on API security, and doing it well.

Elastic Beam is live. If you want to talk to them about security let me know, or just head over to their website. Also, if there are any specific areas of my API security research you’d like me to focus on, let me know. I’ve been having weekly calls with their team and advising them on their release. We’ll see where the relationship goes, but I’m stoked to finally have someone I can partner with to focus on API security. It is an area that needs more research, discussion, as well as storytelling, to help bring awareness to API providers–this stuff takes time, and we need more smart people on the case as soon as possible.

From the time I’ve spent with them, the Elastic Beam team seems to get the problem, and have invested in the right technology. I’m looking forward to working with them to continue to map out the API security space, identify and share information on the most common threats facing API providers. Stay tuned for more about API security, thanks to the Elastic Beam team.

The Unknown Unknowns Of API Security

I am trying to wrap my head around the next steps in the evolution of API security. I am trying to help separate some of the layers of what we collectively call API security, into some specific building blocks I can reference in my storytelling. I’m ramping up my API security research as I onboard a new API service provider partner, and will have more resources to invest in the API security discussion.

Let’s start with the easy “Known Knowns”:

  • Encryption - Using encryption across all aspects of API operations from portal to base URL.
  • API Keys - Making sure everyone who is accessing an API is keying up and passing along with each request.
  • OAuth - Getting more granular with access by applying OAuth when there is access to 3rd party data and content.
  • Rate Limiting - Limiting how much of a resource any single API consumer can access using rate limiting.

These are the first things that come to mind when the topic of API security is brought up. I'd say most conversations begin with encryption, but as someone progresses on their API journey they begin to understand how they can key things up, using OAuth, rate limiting, and other aspects of API management to secure their API operations.
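
As a rough sketch of how the API keys and rate limiting pieces fit together, here is a minimal in-memory version. The keys, limit, and window are illustrative values; a real implementation would live in an API management layer, not application code:

```python
import time

# A minimal sketch of two "known knowns": requiring an API key and enforcing
# a per-key rate limit with an in-memory counter.
API_KEYS = {"key-abc"}   # keys issued to consumers
RATE_LIMIT = 3           # requests allowed per window
WINDOW_SECONDS = 60

counters = {}  # api_key -> (window_start, count)

def check_request(api_key, now=None):
    """Return (allowed, reason) for a request carrying api_key."""
    now = time.time() if now is None else now
    if api_key not in API_KEYS:
        return False, "unknown key"
    start, count = counters.get(api_key, (now, 0))
    if now - start >= WINDOW_SECONDS:
        start, count = now, 0  # start a fresh window
    if count >= RATE_LIMIT:
        return False, "rate limited"
    counters[api_key] = (start, count + 1)
    return True, "ok"

print(check_request("no-such-key"))  # → (False, 'unknown key')
```

The point of making this explicit is that everything in the sketch is deterministic and known; the unknowns discussed below are precisely what logic like this cannot see.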

Next, I want to think about some of the “Known Unknowns”, things like:

  • Freeloaders - People with multiple accounts who generally aren’t healthy consumers.
  • Errors - Errors, misconfigurations, and malformed requests and responses–anything that generally gums up the gears of operations.
  • Vulnerabilities - Making sure all known software and hardware vulnerabilities are patched and communicated as part of operations.
  • Attacks - Defending against the known attacks, staying in tune with OWASP and other groups who are actively identifying, studying, and sharing attacks.

These are the things we might not totally understand or have implemented, but they are known(ish). With the right amount of resources and expertise, any API team should be able to mitigate against these areas of security. There is always a lot of work to turn the unknowns into knowns in this area, but they are doable.

Now I want to think a little about the “Unknown Unknowns”, things like:

  • DNS - What is happening at the DNS level as it pertains to my API security?
  • Relationship - Relationships between requests, even if they are using different keys, IP addresses, and other elements–where is the overlap?
  • Threats - What information is available to me regarding new and emerging threats that I’m only beginning to see, or that maybe my partners and others in the same space are seeing?
  • Sharing - Are there opportunities to share information and see a bigger picture when it comes to vulnerabilities and threats?

These are just a handful of my initial thoughts on what the unknown unknowns might be. Things that are currently off the radar of my known API security practices. API management seems to dominate and in some cases terminate the API security conversation, but I have the feeling there are numerous other considerations out there that we are not seeing. I’m just looking to begin defining this new bucket of security considerations so I can keep adding items to it as I’m doing my research.
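
To illustrate the relationship item above, here is a sketch of grouping requests by a simple behavioral fingerprint to surface overlap between consumers using different keys and IP addresses. The fingerprint choice (user agent plus path) and the sample records are my own assumptions:

```python
from collections import defaultdict

# A sketch of looking for relationships between requests: group request logs
# by a behavioral fingerprint and flag fingerprints that span multiple keys,
# which can reveal one actor hiding behind several accounts.
def correlate(requests):
    """Return fingerprints shared by more than one API key, with the keys involved."""
    groups = defaultdict(set)
    for r in requests:
        fingerprint = (r["user_agent"], r["path"])
        groups[fingerprint].add(r["api_key"])
    return {fp: keys for fp, keys in groups.items() if len(keys) > 1}

# Hypothetical log records: two different keys and IPs, same behavior.
requests = [
    {"api_key": "k1", "ip": "1.2.3.4", "user_agent": "bot/1.0", "path": "/users"},
    {"api_key": "k2", "ip": "5.6.7.8", "user_agent": "bot/1.0", "path": "/users"},
    {"api_key": "k3", "ip": "9.9.9.9", "user_agent": "app/2.1", "path": "/orders"},
]
print(correlate(requests))  # the two keys sharing one fingerprint
```

Rate limiting per key would treat k1 and k2 as unrelated, healthy consumers; it takes a correlation pass like this to even ask whether they are the same actor.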

From your perspective, what are the biggest threats out there for API security that many API providers are completely unaware of? I’m looking to keep expanding on this frontline of API security and turn up the heat beneath my research while I have the resources to do so. I feel like the API security conversation has kind of stagnated after the last wave of API management growth, leaving many providers thinking they have everything covered.

Patent US9462011: Determining trustworthiness of API requests

I’m always fascinated by the patents that get filed related to APIs. Most just have an API that is part of the equation, but some of the patents are directly for an API process. It’s no secret that I’m a patent skeptic. I’m not anti-patent, I just think the process is broken when it comes to the digital world, and specifically when it comes to APIs and interoperability. Here is one of those API patents that show just how broken things are:

Title: Determining trustworthiness of API requests based on source computer applications’ responses to attack messages
Number: US9462011
Owner: CA, Inc.

Abstract: A method includes receiving an application programming interface (API) request from a source computer application that is directed to a destination computer application. An attack response message that is configured to trigger operation of a defined action by the source computer application is sent to the source computer application. Deliverability of the API request to the destination computer application is controlled based on whether the attack response message triggered operation of the defined action. Related operations by API request risk assessment systems are disclosed.

I get that you might want to patent some of the secret sauce behind this process, but when it comes to APIs, and API security, I'm thinking we need to keep things open, reusable, and interoperable. Obviously, this is just my not-so-business-savvy view of the world, but from my tech-savvy view of how we secure APIs, patented processes help nobody.

When it comes to API security you gain an advantage by providing actual solutions and doing it better than anyone else. Then you do not need to defend anything; everyone will be standing in line to buy your services, because securing your APIs is critical to doing business in 2017.

In The Future Our Current Views Of Personal Data Will Be Shocking

Thinking About The Privacy And Security Of Public Data Using API Management

When I suggest modern approaches to API management be applied to public data I always get a few open data folks who push back saying that public data shouldn’t be locked up, and needs to always be publicly available–as the open data gods intended. I get it, and I agree that public data should be easily accessible, but there are increasingly a number of unintended consequences that data stewards need to consider before they publish public data to the web in 2017.

I’m going through this exercise with my recommendations and guidance for municipal 211 operators when it comes to implementing Open Referral’s Human Services Data API (HSDA). The schema and API definition centers around the storage and access to organizations, locations, services, contacts, and other key data for human services offered in any city–things like mental health resources, suicide assistance, food banks, and other things we humans need on a day to day basis.

This data should be publicly available, and easy to access. We want people to find the resources they need at the local level–this is the mission. However, once you get to know the data, you start understanding the importance of not everything being 100% public by default. When you come across listings for Muslim faith and LGBTQ services, or possibly domestic violence shelters and needle exchanges, you realize there are numerous types of listings where we need to be having sincere discussions around security and privacy concerns, and possibly think twice about publishing all or part of a dataset.

This is where modern approaches to API management can lend a hand, where we can design specific endpoints that pull specialized information for specific groups of people, and define who has access through API rate limiting. Right now my HSDA implementation has two access groups, public and private. Every GET path is publicly available, and if you want to POST, PUT, or DELETE data you will need an API key. As I consider my API management guidance for implementors, I'm adding a healthy dose of the how and why of privacy and security using existing API management solutions and practices.
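
As a sketch of the kind of policy I'm describing (public reads, keyed writes, and the option of tighter control over sensitive categories), here is a minimal version. The category names and keys are illustrative, and the sensitive-category rule goes a step beyond my current two-group implementation:

```python
# A sketch of endpoint-level access control for public human services data:
# GETs are public by default, writes require a key, and some sensitive record
# categories (illustrative examples) require an approved key even to read.
SENSITIVE_CATEGORIES = {"domestic-violence-shelter", "needle-exchange"}
APPROVED_KEYS = {"key-approved"}
WRITE_KEYS = {"key-approved", "key-editor"}

def authorize(method, category, api_key=None):
    """Return True if the request should be allowed under this policy sketch."""
    if method == "GET":
        if category in SENSITIVE_CATEGORIES:
            return api_key in APPROVED_KEYS  # sensitive reads need approval
        return True                          # public data stays public
    return api_key in WRITE_KEYS             # POST/PUT/DELETE need a write key

print(authorize("GET", "food-bank"))                          # → True
print(authorize("GET", "needle-exchange"))                    # → False
print(authorize("POST", "food-bank", api_key="key-editor"))   # → True
```

The mechanics are trivial; the hard part, as discussed below, is deciding which categories land in the sensitive bucket, and being transparent about it.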

I am not interested in playing decider when it comes to what data is public, what is private, and what requires approval before getting access. I'm merely thinking about how API management can be applied in the name of privacy and security when it comes to public data, and how I can put tools in the hands of data stewards and API providers that help them make the decision about what is public, private, and more tightly controlled. The trick with all of this is how transparent providers should be about the limits and restrictions imposed, and how to communicate the official stance with all stakeholders appropriately when it comes to public data privacy and security.

It’s Not Just The Technology: API Monitoring Means You Care

I was just messing around with a friend online about monitoring of our monitoring tools, where I said that I have a monitor setup to monitor whether or not I care about monitoring. I was half joking, but in reality, giving a shit is actually a pretty critical component of monitoring when you think about it. Nobody monitors something they don’t care about. While monitoring in the world of APIs might mean a variety of things, I’m guessing that caring about those resources is a piece of every single monitoring configuration.

This has come up before in conversation with my friend Dave O’Neill of APIMetrics, where he tells stories of developers signing up for their service, running the reports they need to satisfy management or customers, and then turning off the service. I think this type of behavior exists at all levels, with many reasons why someone truly doesn’t care about a service actually performing as promised, and doing what it takes to rise to the occasion–resulting in the instability and unreliability of APIs that gets touted in the tech blogosphere.

There are many reasons management or developers will not truly care when it comes to monitoring the availability, reliability, and security of an API, demonstrating yet another aspect of the API space that is more about business and politics than it is ever technical. We are seeing this play out online with the flakiness of websites, applications, devices, and the networks we depend on daily, and the waves of breaches, vulnerabilities, and general cyber(in)security. This is a human problem, not a technical one, but there are many services and tools that can help mitigate people not caring.

API Rate Limiting At The DNS Layer

I just got an email from my DNS provider CloudFlare about rate limiting and protecting my APIs. I am a big fan of CloudFlare, partly because I am a customer, and I use them to manage my own infrastructure, but also partly due to the way they understand APIs, and actively use them as part of their business, products, and services.

Their email spans a couple areas of my research that I find interesting, and extremely relevant: 1) DNS, 2) Security, 3) Management. They are offering me something that is traditionally done at the API management layer (rate limiting), but now offering to do it for me at the DNS layer, expanding the value of API rate limiting into the realm of security, and specifically in defense against DDoS attacks--a serious concern.

Talk about an easy way to add value to my world as an API provider. One that is frictionless, because I'm already depending on them for the DNS layer of my web, and API layers of operations. All I have to do is signup for the new service, and begin dialing it in for all of my APIs, which span multiple domains--all conveniently managed using CloudFlare.

Another valuable thing CloudFlare's approach does, in my opinion, is to reintroduce the concept of rate limiting to the world of websites. This helps me in my argument that companies, organizations, institutions and government agencies should be considering having APIs to alleviate website scraping. Using CloudFlare they can now rate limit the website while pointing legitimate use cases to the API where their access can be measured, metered, and even monetized when it makes sense.

I'm hoping that CloudFlare will be exposing all of these services via their API, so that I can automate the configuration of rate limiting for my APIs at the DNS level using APIs. As I design and deploy new API endpoints I want them automatically protected at the DNS layer using CloudFlare. I don't want to have to do extra work when it comes to securing and managing web or API access. I just want a baseline for all of my operations, and when I need to, I can customize per specific domains, or down to the individual API path level--the rest is automated as part of my continuous integration workflows.
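As a sketch of what that automation might look like, here is a hypothetical example of assembling a rate limit rule for a newly deployed API path. The endpoint path, payload fields, and "simulate" mode are assumptions loosely modeled on CloudFlare's v4 API--check their documentation before relying on any of it:

```python
# A hypothetical sketch of automating DNS-layer rate limiting as part of
# a deployment workflow. The URL and payload shape are assumptions; this
# only builds the request, it does not send it.
import json

def build_rate_limit_rule(zone_id, url_pattern, threshold, period_seconds):
    """Assemble the rule we would submit for each newly deployed API path."""
    return {
        "method": "POST",
        "url": f"https://api.cloudflare.com/client/v4/zones/{zone_id}/rate_limits",
        "body": json.dumps({
            "match": {"request": {"url": url_pattern}},
            "threshold": threshold,          # requests allowed per period
            "period": period_seconds,        # window in seconds
            "action": {"mode": "simulate"},  # observe first, block later
        }),
    }

rule = build_rate_limit_rule("abc123", "api.example.com/v1/*", 300, 60)
print(rule["url"])
```

A continuous integration step could loop over every path in an API definition and emit one of these rules per domain, giving each new endpoint a baseline of protection without extra manual work.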

With Each API We Increase The Attack Surface Area

It is easy for me to get excited about a new API. I'm an engineer. I'm a dude. I am the API Evangelist. It is easy to think about the potential for good when it comes to APIs. It is much harder to suspend the logical side of my brain and think about the ways in which APIs can be used in negative ways. As a technologist it is natural for me to focus in on the technology, and tune out the rest of the world--it is what we do. It takes a significant amount of extra effort to stop, suspend the portion of your brain that technology whispers to, and think about the unintended consequences, and the pros and cons of why we are doing APIs.

Technologists aren't very good at slowing down and thinking about the pros/cons of connecting something to the Internet, let alone whether or not an API should even exist in the first place (it has to exist!). As I read a story about the increases in DDOS attacks on the network layer of our online world, I can't help but think that with each new API we deploy, we are significantly increasing the attack surface area for our businesses, organizations, institutions, and government agencies. It feels like we are good at thinking about the amazing API potential, but we really suck at seeing what a target we are putting on our back when we do APIs.

We seem to be marching forward, drunk on the potential of APIs and Internet-connected everything. We aren't properly securing the technology we have, something we can see playing out with each wave of vulnerabilities, breaches, and leaks. We are blindly pushing forward with new API implementations, and using the same tactics we are using for our web and mobile technology, something we are seeing play out with the Internet of Things, and the vulnerable cameras, printers, and other objects we are connecting to the Internet using APIs.

With each API we add, we are increasing the attack surface area for our systems and devices. APIs can be secured, but from what I'm seeing we aren't investing in security with our existing APIs, something that is being replicated with each wave of deployments. We need to get better at thinking about the negative consequences of doing APIs. We need to stop making ourselves targets. We need to get better at thinking about whether or not an API should exist or not. We need a way to better visualize the target surface area we've crafted for ourselves using APIs, and be a little more honest with ourselves about why we are doing this.

Discovering New APIs Through Security Alerts

I tune into a number of different channels looking for signs of individuals, companies, organizations, institutions, and government agencies doing APIs. I find APIs using Google Alerts, monitoring Twitter and Github, using press releases and via patent filings. Another way I am learning to discover APIs is via alerts and notifications about security events.

An example of this can be found via the Industrial Control Systems Cyber Emergency Response Team out of the U.S. Department of Homeland Security (@icscert), with the recently issued advisory ICSA-16-287-01 OSIsoft PI Web API 2015 R2 Service Acct Permissions Vuln posted to the ICS-CERT website, leading me to the OSIsoft website. They aren't very forthcoming with their API operations, but this is something I am used to, and in my experience, companies who aren't very public with their operations tend to also cultivate an environment where security issues go unnoticed.

I am looking to aggregate API related security events and vulnerabilities like the feed coming out of Homeland Security. This information needs to be shared more often, opening up further discussion around API security issues, and even possibly providing an API for sharing real-time updates and news. I wish more companies, organizations, institutions, and government agencies would be more public with their API operations and be more honest about the dangers of providing access to data, content, and algorithms via HTTP, but until this is the norm, I'll continue using API related security alerts and notifications to find new APIs operating online.

An Auditing API For Checking In On API Client Activity

Google just released a mobile audit solution for their Google Apps Unlimited users looking to monitor activity across iOS and Android devices. At first look, the concept didn't strike me as anything I should write about, but once I got to thinking about how the concept applies beyond mobile to IoT, and the potential for external 3rd party auditing of API and endpoint consumption--it stood out as a pattern I'd like to have in the filing cabinet for future reference.

Using the Google Admin SDK Reports API you can access mobile audit information by user, device, or auditing event. API responses include details about the device including model, serial numbers, user emails, and any other element that is included as part of device inventory. This model seems like it could easily be adapted to IoT devices, bots, and voice clients.
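As a rough sketch of what pulling this audit data might look like, the example below builds a request URL following the Admin SDK activities.list pattern. Treat the exact parameters and the event name as assumptions to verify against Google's documentation, and note that a real call also needs OAuth credentials:

```python
# A sketch of building a mobile audit request against the Google Admin
# SDK Reports API. The URL shape follows the documented activities.list
# pattern, but the specific event name below is an assumption.
from urllib.parse import urlencode

BASE = "https://admin.googleapis.com/admin/reports/v1"

def mobile_audit_url(user_key="all", event_name=None, max_results=100):
    """Build the request URL for mobile audit activity, optionally
    filtered to a single event type."""
    params = {"maxResults": max_results}
    if event_name:
        params["eventName"] = event_name  # hypothetical event name below
    return f"{BASE}/activity/users/{user_key}/applications/mobile?{urlencode(params)}"

print(mobile_audit_url(event_name="DEVICE_REGISTER_UNREGISTER_EVENT"))
```

Swapping the `applications/mobile` segment for another application is what makes the pattern feel adaptable to other kinds of API-driven clients.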

One aspect that stood out for me as a pattern I'd like to see emulated elsewhere, is the ability to verify that all of your deployed devices are running the latest security updates. After the recent IoT-launched DDoS attack on Krebs on Security, I would suggest that the security camera industry needs to consider implementing an audit API, with the ability to check for camera device security updates.

Another area that caught my attention was their mention of a capability "mobile administrators have been asking for"--a way "to take proactive actions on devices without requiring manual intervention." Meaning you could automate certain events, turning off, or limiting access to specific API resources. When you open this up to IoT devices, I can envision many benefits depending on the type of device in play.

There are two dimensions of this story for me: that these audit events can apply to potentially any client that is consuming API resources, and that you can access this data in real time, or on a scheduled basis, via an API. With a little webhook action involved, I could really envision some interesting auditing scenarios that are internally executed, as well as an increasing number of them being executed by external 3rd party auditors making sure mobile, device, and other API-driven clients are operating as intended.

A Dedicated Security Page For Your API Portal

One area I am keeping an eye on while profiling APIs, and API service providers, are any security-related practices that I can add to my research. While looking through DataDog I came across their pretty thorough security page, providing some interesting building blocks that I will add to my API security research. This is all I do as the API Evangelist--aggregate the best practices of existing providers, and shine a light on what they are up to. 

On their security page, DataDog provides details on physical and corporate security, information about data in transit, at rest, as well as retention, including personally identifiable information (PII), and details surrounding customer data access. They also provide details of their monitoring agent and how it operates, as well as how they patch, employ SSO, and require their staff to undergo security awareness training. The important part of this is that they encourage you to disclose any security issues you find--something that is critical for providers to do.

Transparency when it comes to security practice is an important tool in our API security toolbox. It is important that API providers share their security practices like DataDog does, helping build trust, and demonstrate competency when it comes to operations. I'm working on an API security page template for my default API portal, and DataDog's approach provides me with some good elements I can add to my template.

Helping You Address The Security Gap In Your API Infrastructure With Sapience

Welcome to our latest APIWare project--Sapience! It is our team's response to the need for a more API-focused security scanning solution. At the end of 2015, the team was looking for our next project, and they asked me for my thoughts on what the biggest need was in the API sector, based upon my monitoring as the API Evangelist--I quickly responded with security.

Sapience is currently in beta, but I wanted to take a moment to share some of the thinking that has gone into Sapience, and the current state of security when it comes to APIs. We feel pretty strongly that, like security in general, API security is a very large and daunting challenge, and we need to work hard to peel back the layers a bit, and get to work on better securing digital infrastructure that is increasingly being made available via web APIs.

Being API-First Helps With Security
The first stop when it comes to API security is just doing APIs, and making them a priority across all website, mobile, and device-based development, as well as system-to-system integration. Using a consistent interface to access all of your digital assets helps ensure consistency, allowing for potentially more accountability as part of overall security efforts. API-first is the first step of any successful API security strategy.

SSL All The APIs By Default 
Encryption is one of the most important tools in our security toolboxes, but unfortunately it is also something that still is not the default mode for API providers. Whether it is the costs associated with certificates and implementation, or legacy beliefs around the performance tax encryption can bring, APIs are not always SSL by default. SSL by default is the second step of any successful API security strategy.
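As a small illustration of the SSL-by-default posture, here is a framework-agnostic helper that upgrades any plain HTTP URL to HTTPS. How you wire this into redirects or link generation depends on your stack; this is only a sketch of the policy:

```python
# A minimal sketch of "SSL by default": never hand out or honor a plain
# http:// URL, always upgrade it to https://.
from urllib.parse import urlsplit, urlunsplit

def enforce_https(url):
    """Return the HTTPS equivalent of a URL, upgrading http:// links."""
    parts = urlsplit(url)
    if parts.scheme == "https":
        return url
    return urlunsplit(("https",) + tuple(parts[1:]))

print(enforce_https("http://api.example.com/v1/users"))
# -> https://api.example.com/v1/users
```

In practice the same rule is usually enforced at the load balancer or gateway with a permanent redirect, so no client ever completes an unencrypted exchange.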

API Management Provides Authentication
As I studied the API security landscape, the leading API management providers often dominated the conversation, with their ability to secure APIs using keys, OAuth, and other increasingly common solutions. API management is definitely a significant portion of the frontline when it comes to API security; the problem is when the conversation stops here, and API providers are not actively testing and pushing on their infrastructure at this front line.

Securing The Known Universe With API Definitions And Discovery
Another layer of API security discussions that emerged as I studied the landscape was the important role API definitions are playing when it comes to securing API infrastructure. In short, you can't secure what you don't know about, and having your API-first infrastructure well defined using common API definition formats is significantly helping API providers get their security house in order.

Automated Scanning For Most Common API Vulnerabilities
After API-first practices, SSL by default, modern API management solutions, and robust API definition and discovery work, we get to where Sapience excels--scanning this API infrastructure for common vulnerabilities. I know, many of you will want a magic pill that will address all of our security needs, but in our rush to deploy APIs for the rapidly expanding mobile landscape, many companies are not actively securing existing infrastructure for the most common threats.
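As a simplified illustration of this kind of definition-driven scanning, the sketch below walks a toy OpenAPI-style definition and flags operations that declare no security requirement. Real scanners (Sapience included) go much further than this; it only shows how an API definition makes the unknowns visible:

```python
# A simplified sketch of definition-driven scanning: walk an API
# definition and flag operations with no security requirement declared.
# The toy definition below is hypothetical.

def unsecured_operations(definition):
    """Return (path, METHOD) pairs that declare no security scheme."""
    flagged = []
    default_security = definition.get("security", [])
    for path, methods in definition.get("paths", {}).items():
        for method, operation in methods.items():
            # fall back to the document-level security requirement
            if not operation.get("security", default_security):
                flagged.append((path, method.upper()))
    return flagged

spec = {
    "security": [],
    "paths": {
        "/services": {"get": {}},
        "/contacts": {"post": {"security": [{"api_key": []}]}},
    },
}
print(unsecured_operations(spec))  # -> [('/services', 'GET')]
```

The same walk is a natural place to bolt on further checks, like probing each flagged path for plain HTTP access or missing rate limits.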

In my monitoring of the space I regularly come across technology solutions that will provide comprehensive online security, and even more agencies who will help you secure your company's online presence, but as of January 2016 there were no API-specific SaaS solutions that helped address even the simplest vulnerabilities when it came to security. This is why I identified security as the number one problem out there, and why the APIWare team jumped at the opportunity to develop an API-specific solution.

APIs have provided a much healthier approach to defining the digital infrastructure of companies, organizations, institutions, and government agencies for the last 10 years. The next stage of this evolution is continuing to bring security out of the IT shadows, and acknowledge that much of this infrastructure is running on the open Internet, even if it is hidden behind the web, mobile, or Internet of things applications. At APIWare, we want to help lead this conversation, and this is why we started Sapience.

Contact us to get started scanning your critical API infrastructure today.

We Need to Change the Psychology of Security

A Healthy Stance On Privacy And Security When It Comes To Healthcare APIs

I am reading through the API task force recommendations out of the Office of the National Coordinator for Health Information Technology (ONC), to help address privacy and security concerns around mandated API usage as part of the Common Clinical Data Set, Medicare, and Medicaid Electronic Health Records. The recommendations contain a wealth of valuable insights around healthcare APIs but are also full of patterns that we should be applying across other sectors of our society where APIs are making an impact. To help me work through the task force's recommendations, I will be blogging through many of the different concepts at play.

Beyond the usage of "patient-directed APIs" that I wrote about earlier, I thought the pragmatic view on API privacy and security was worth noting. When it comes to making data, content, and other digital resources available online, I hear the full spectrum of concerns, and it leaves me optimistic to hear government agencies speak about security and privacy in such a balanced way.

Here is a section from the API task force recommendations:

Like any technology, APIs allow new capabilities and opportunities and, like any other technology, these opportunities come with some risks. There are fears that APIs may open new security vulnerabilities, with apps accessing patient records "for evil", and without receiving proper patient authorization. There are also fears that APIs could provide a possible "fire hose" of data as opposed to the "one sip at a time" access that a web site or email interface may provide.

In testimony, we heard almost universally that, when APIs are appropriately managed, the opportunities outweigh the risks. We heard from companies currently offering APIs that properly managed APIs provide better security properties than ad-hoc interfaces or proprietary integration technology.

While access to health data via APIs does require additional considerations and regulatory compliance needs, we believe existing standards, infrastructure, and identity proofing processes are adequate to support patient directed access via APIs today.

The document is full of recommendations on how to strike this balance. It is refreshing to hear such a transparent vision of what APIs can be. They weigh the risks, alongside the benefits that APIs bring to the table while also being fully aware that a "properly managed API" provides its own security. Another significant aspect of these recommendations for me is that they also touch on the role that APIs will play in the regulatory and compliance process.

I have to admit, the area of healthcare APIs isn't one of the most exciting stacks in the over 50 areas I track across the API space, but I'm fully engaged with this because of the potential of a blueprint for privacy and security that can be applied to other types of APIs. When it comes to social, location, and other data the bar has been set pretty low when it comes to privacy and security, but health care data is different. People tend to be more concerned with access, security, privacy, and all the things we should already be discussing when it comes to the rest of our digital existence--opening the door for some valuable digital literacy discussions.

Hopefully, I don't run you off with all my healthcare API stories, and you can find some gems in the healthcare API task force's recommendations, like I am. 

Competing Views Around The Value And Ownership Of Digital Resources Impacting API Security

I was reading a post about how having an unclear sense of ownership hurts API security, which showcases the different views on who owns security when it comes to exposing corporate digital assets via APIs. When I read the title, I anticipated the story being about a difference in how ownership of the asset(s) itself is viewed, but it ended up focusing on the ownership of security itself, not ownership of the assets which are being exposed--something I think gets closer to the root of the problem than who "owns security".

In short, the IT and developer groups who are often charged with exposing corporate assets via APIs will view those digital resources in very different ways than other people at the company. These groups will focus on exposing digital resources in a very technical sense, making them available so that others can integrate them into their apps and systems--it isn't always in their nature to secure things sensibly. Their focus is to open up access, something the article touches on, but I think it goes deeper than just being about API security. Developers and IT are rarely ever going to see the digital resource in the same way that business stakeholders will, let alone security-focused players (hopefully your IT and dev teams have specialists influencing things).

APIs have made a name for themselves because a handful of companies successfully exposed their digital resources in this new way, allowing external perspectives on those digital resources to enter the conversation, which allowed for innovation to occur. In this handful of API origin stories we like to tell, the owners of the digital resources at play were open to outside views of what their digital resource was, and how that resource could be put to use. These leading companies were open to an alternative view of the ownership and access of these digital resources, something that allowed API platforms to flourish--but this is not something that will happen in all situations.

APIs really begin to go wrong when the sense of ownership around a digital resource is already unhealthy, resulting in what my friend Ed Anuff speaks of, with everyone doing the API economy wrong. Without proper buy-in, developers and IT will often overlook security around the resources being exposed--they just don't understand the importance of the resource in the same way. Coming from the opposite direction, business users will often come in and apply their "wet blanket" sense of ownership on the platform--resulting in heavy handed registration and approval flows, sales cycles, pricing, rate limits, and other common things you see slowing API adoption.

APIs should be about exposing our digital resources using now ubiquitous Internet technology, in a way that opens up our resources, and the culture of our businesses, organizations, institutions, and government agencies, to outside views about what our resources are, and how they can be put to use. This is something that, when done in the right environment, can reap some serious benefits for everyone involved, but when done in a culture where there is already an imbalance around what digital resource ownership is, shit can really go wrong--with security being just one stage where it plays out. In the end, APIs are not for everyone. Some people just have too strict a view around the value of their digital resources, and the ownership of those resources, for the API thing to ever actually work--with company IT and developer security practices being just a symptom of a much larger illness.

There Are Two Types of Online Security Discussions Going On Currently, One Is Real, And The Other Is Theater

I added "security" as an area of my monitoring of the API space a couple years back, where I curate news stories, white papers, and other resources on the topic of online security. I have carved off most of the API related aspects of this into my security research project, but I have also noticed another schism emerging in much of the information I'm gathering during the course my work.

There are two distinct types of conversations going on. The first is about security, encryption, and many of the technical aspects of how we protect ourselves, our businesses, and our organizations online. The second focuses on the theater around all of this, using many of the same words, but is far less technical, much more emotional, and wraps itself in special words like "cybersecurity".

This security theater is used by the NSA, FBI, police, and other government organizations, but it isn't exclusive to these groups. Tech giants like Apple, Google, Facebook, and Twitter also play the security theater card when it benefits them. Kind of like a soccer player will overplay a foul to get the attention of the referee. It can be difficult to tell the difference between when these groups are truly discussing security, and when they are putting on a play or skit on the online security stage--for the rest of our benefit.

Security theater, aka "cybersecurity", plays just as important a role in security as the technical nuts and bolts. If you get citizens acting on their emotions and fears, it is much easier to shift the conversation in your favor, without ever having to do anything truly technical. Cybersecurity is not that new; it is just an evolution of previous security fear tactics used by the military industrial complex, and government leaders, to control the population--it is just now being adapted to the new digital landscape we find ourselves living in.

Security theater will continue to be a box office hit in coming years!

Be More Transparent With Your Security Using APIs

Security Will Increasingly Be Used As A Component Of Tiered API Planning

As I look through the business models of the leading API providers I am profiling, I'm increasingly seeing security as a selling point. When API providers break down their pricing into tiers, they are usually very good at breaking down the elements of what goes into each plan--this is what I have been studying for the last couple of weeks.

When I come across security leveraged as part of API plans, it is rarely a part of the free or entry levels, and is something you usually see in the higher level paid plans, and enterprise tiers. Here is an example of this, in a screen shot from the Box pricing page.

These plans are part of the SaaS side of Box's operations, but Box is pretty unique in that they maintain separate pricing tiers for the SaaS side of their operations, and a related, but additional set of pricing for the API side of their platform. Box's approach provides the best example of this in action, with add-ons beyond the plan-based pricing that are also very security focused.

Box is a document platform that serves the enterprise, which includes numerous, very heavily regulated industries. It makes sense that they are emphasizing security. My focus is that security is leveraged as a specific feature of individual plans, with an emphasis on it being present in the upper tiers, and with add-ons.

This really isn't news. It makes sense that an emotionally charged element like security is used as a component of planning and pricing. I'm just looking to document security as a component of API planning for my research, and educate other platforms about the potential for this type of use, but I am also looking to better understand how different companies, in different industries, are wielding it (or not).

I predict that security will increasingly be used as a component like this in SaaS and API planning. In the current, very insecure online environment we are all living in, individuals and companies will pay a premium for real or perceived security. My goal is to better discover when security is wielded like this, and try to better understand when it is a real component of API planning versus when it's just used as an emotional way to convince people to upgrade to higher level plans.

You Have Had Three Security Strikes, Hand Over Your Data Storage Operations For 12 Months Plus Probation #APIDesignFiction

Your company has had its third strike. Your first security breach was in January of 2018, with the second only six months later in June, and the last came in February of 2019. Your company has shown that it just doesn't have what it takes to secure your users' data, and we are going to need you to hand over the keys to your data storage for a 12 month period.

During this time, you will access your company data via APIs that are monitored by a professional data management organization, as well as by state and federal auditors, to make sure all required security and privacy procedures are followed. All server, database, and storage operating procedures will be documented, and shared with your organization when the 12 month period is complete.

We have assessed that 60% of your infrastructure uses APIs, so the switch-over to the new infrastructure will not be that difficult. Your company will have 45 days to accomplish the other 40%, refactoring all of your software to use APIs. Part of the illness within your organization was that this 40% of your operations was technical debt that your organization refused to bring up to speed--resulting in several large breaches.

When we hand your data storage infrastructure back to you, we will audit for another 18 month period to ensure you are practicing a 100% API strategy, as well as end-to-end encryption, making sure it is applied for all servers, storage, and in transit using SSL. During this 18 month period our auditors will assess whether you have the resources to bring your operations up to an acceptable level. If you do not meet requirements, the period can be extended, or your infrastructure can be ordered back into a forced-management situation again.

If you have any questions, please contact your case manager, and your IT operations manager will be in touch shortly with more details on the coming transition period.

If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.