
API Definitions News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API definition conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is defining not just their APIs, but their schema, and other moving parts of their API operations.

Sadly, Stack Exchange Blocks API Calls Being Made From Any Of Amazon's IP Blocks

I am developing an authentication and access layer for the API Gallery that I am building for Streamdata.io, while also federating it for usage as part of my API Stack research. In addition to building out these catalogs for API discovery purposes, I’m also developing a suite of tools that allow users to subscribe to different topics from popular sources like GitHub, Reddit, and Stack Overflow (Exchange). I’ve been busy adding one or two providers to my OAuth broker each week, until the other day I hit a snag with the Stack Exchange API.

I thought my Stack Exchange API OAuth flow had been working. It has been up for a few months, and I seem to remember authenticating against it before, but this weekend I began getting an error that my IP address was blocked. I was looking at log files trying to understand if I was making too many calls, or committing some other potential violation, but I couldn’t find anything. Eventually I emailed Stack Exchange to see what their guidance was, and got a prompt reply:

“Yes, we block all of Amazon’s AWS IP addresses due to the large amount of abuse that comes from their services. Unfortunately we cannot unblock those addresses at this time.”

Ok then. I guess that is that. I really don’t feel like setting up another server with another provider just so I can run an OAuth server from there. Or maybe I will have to, if I expect to offer a service that provides OAuth integration with Stack Exchange. It’s a pretty unfortunate situation that doesn’t make a whole lot of sense. I can understand adding another layer of whitelisting for developers, pushing them to add their IP address to their Stack Exchange API application, and pushing us to justify that our app should have access, but blacklisting an entire cloud provider from accessing your API is just dumb.

I am going to weigh my options, and explore what it will take to set up another server elsewhere. Maybe I will start setting up individual subdomains for each OAuth provider I add to the stack, so I can decouple them, and host them on another platform, in another region. This is one of those roadblocks you encounter doing APIs that just doesn’t make a whole lot of sense, and yet you still have to find a workaround–you can’t just give in, despite API providers being so heavy handed, and not considering the impact of these moves on their consumers. I’m guessing in the end, the Stack Exchange API doesn’t fit into their wider business model, which is something that allows blind spots like this to form, and continue.
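
In the meantime, Amazon publishes all of its IP ranges as a machine readable JSON file, so you can at least confirm whether a given egress address falls inside the blocked space before standing up infrastructure elsewhere. A minimal sketch in Python, assuming the requests package is available (the address being checked is hypothetical):

```python
import ipaddress

import requests

# Amazon publishes all of its IP ranges as a machine readable JSON file.
AWS_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

def is_aws_ip(address):
    """Return True if the given IPv4 address falls in a published AWS range."""
    ip = ipaddress.ip_address(address)
    prefixes = requests.get(AWS_RANGES_URL, timeout=10).json()["prefixes"]
    return any(ip in ipaddress.ip_network(p["ip_prefix"]) for p in prefixes)

# Hypothetical address for the server the OAuth broker runs on.
print(is_aws_ip("52.95.110.1"))
```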


Twice The Dose Of Vanick Digital At APIStrat in Nashville, TN Next Month

We are kicking it into overdrive now that the schedule is up for APIStrat in Nashville, TN, this September 24th through 26th. From now until the event at the end of September you are going to hear me talk about all the amazing speakers we have, the companies they work for, and the interesting things they are all doing with APIs. One of the perks of being a speaker or a sponsor at APIStrat–you get coverage on API Evangelist, and become part of the buzz around the 9th edition of the API Strategy & Practice Conference (APIStrat), now operated by the OpenAPI Initiative (OAI) and the Linux Foundation.

Today’s post is about my friends over at the digital solutions and API management agency Vanick Digital. With APIStrat coming to their backyard, and their ability to capture the attention of the APIStrat program committee, Vanick Digital has two separate talks this year:

  • Securing the Full API Stack by Patrick Chipman - APIs open up new channels for sharing and consuming data, but whenever you open a new channel, new security risks emerge. Additionally, APIs often involve a variety of new components, such as API gateways, in-memory databases, edge caches, facade layers, and microservice-aligned data stores that can complicate the security landscape. How and where do you apply the right controls to ensure your API and your data are secure? In this session, we’ll answer that question by identifying the different components commonly used in the delivery of API products. For each layer, we’ll discuss the security risks that can and should be mitigated there, along with best practice approaches (including ABAC, OAuth2, and more) to implement those mitigations.

  • What Do You Mean By “API as a Product”? by Lou Powell - You may have heard the term “API Product.” But what does it mean? In this talk I will introduce the concept and explain the benefits and challenges of transforming your organization to view your APIs as measurable products that expose your company’s capabilities, creating agility, autonomy, and acceleration. Traditional product manufacturers create new products, launch them into the marketplace, and then measure value; we will teach you to view your APIs in the same way. Concepts covered in this presentation will include designing APIs with Design Thinking, funding your product, building teams, marketing your API, managing your marketplace, and measuring success.

Showcasing their skills as an API focused agency by bringing it to the stage at APIStrat–smart! I am currently working with their team to understand how API Evangelist and Vanick Digital can work more closely together on projects, helping me support the customers I’m reaching with my storytelling and workshops, and delivering, scaling, and managing the day to day details I don’t have the time to provide for my customers. So it makes me happy to see them at APIStrat, sharing their wisdom, and demonstrating what they are capable of. If you are under-resourced like many API providers are, I recommend coming to APIStrat and meeting with the team, or if it can’t wait until September, feel free to reach out directly–just let them know where you found them.

APIStrat is seven weeks away, so make sure you get registered. The workshop, session, and keynote lineup is locked up, but we still have a handful of sponsorship opportunities available. You can find the sponsorship prospectus on the web site, or feel free to contact me directly and I’ll get you plugged in with the events team. Make sure you don’t miss out on an opportunity to be part of this ongoing API conversation that we’ve kept going since 2013–where API developers, architects, designers, and API business leaders, evangelists, advocates, and the API curious gather to discuss where the API space is headed. Now that APIStrat is operated by the OpenAPI Initiative, it is the place to be if you want to contribute to the road map for the OpenAPI specification, and influence the direction of the specification. No matter how you choose to get involved, we look forward to seeing you all in Nashville next month!


Practical SecDevOps for APIs From @42Crunch At APIStrat In Nashville This Fall

We are gearing up for the next edition of APIStrat in Nashville, TN this September 24th through 26th. With the conference less than two months away, and the schedule up, I’m building momentum with my usual drumbeat about the speakers, and companies involved. So you’ll be reading a lot of stories related to APIStrat in coming weeks, where I’m looking to build awareness and attendance of the conference, but more importantly showcasing the individuals and companies who are supporting it and helping make the 9th edition of APIStrat amazing.

One of the innovative startups I’m partnering with right now, and who you will find speaking and sponsoring at APIStrat, is 42Crunch. Full disclosure: I’m regularly talking with 42Crunch regarding their road map, and I consider them an API Evangelist partner; however, this is because I find them to be one of the more progressive, and important API startups out there right now. 42Crunch is important in my opinion because they are focusing on API security, a critical stop along the API lifecycle, and also because of their OpenAPI-driven, awareness-building approach to delivering API security. 42Crunch isn’t just bringing API security solutions to the table for you to purchase as a service, they bring API security solutions to the table that help you invest in your internal API security practices–which is the most critical aspect of what they do in my opinion.

42Crunch is focused on API security, but they are what I consider to be a full API lifecycle solution. Meaning they play nicely as one of the tools in your API lifecycle toolbox. Which begins with being OpenAPI-driven, and treating your API’s definition as a contract, but with 42Crunch it is about using this contract to empower your API team to make API security a first-class citizen across all stops along the API lifecycle. Not just at the API management layer, or as an afterthought later on when you scan your infrastructure–42Crunch is baking security into your OpenAPI contract, proxying your APIs, and ensuring the right security policies are being applied consistently across the API lifecycle. Helping you think of API security all along the way, from design to deployment, to management, testing, monitoring, and deprecation.

If you want to learn more about what 42Crunch offers, you should get registered for APIStrat, and join us in Nashville, TN. My friend Isabelle Mauny will be giving her talk on Practical SecDevOps for APIs–here is the abstract for her talk: In an ever agile world, API security must become a commodity. By working with security “ON” as early as possible, API developers can detect vulnerabilities when they are easy to fix. By continuously testing APIs for issues, they can ensure vulnerabilities do not sneak in later in the lifecycle. In this session, Isabelle presents a SecDevOps methodology and shares practical solutions for API security assessment, API protection, and security monitoring. You should take advantage of this opportunity to learn more about what 42Crunch has to offer, and speak with Isabelle in person about where API security is going in 2018, because they are the people pushing forward the conversation as part of the OpenAPI Initiative, and with their API security services.

Make sure you don’t miss out on the API conversation in Nashville this fall. Get registered for APIStrat, and make sure you are participating in the ongoing API discussion that is APIStrat. While the conference will continue to be an environment where developers and business folk gather to discuss the technology, business, and politics of APIs, this year will also have a music focus because of the venue. Making it something you will not want to miss out on, with all the keynotes, sessions, workshops, and hallway and late night conversations with API leaders from across the sector.


Ping Identity Acquires ElasticBeam To Establish New API Security Solution

You don’t usually find me writing about API acquisitions unless I have a relationship with the company, or there are other interesting aspects of the acquisition that make it noteworthy. This acquisition of Elastic Beam by Ping Identity has a little of both for me, as I’ve been working with the Elastic Beam team for over a year now, and I’ve been interested in what Ping Identity is up to because of some research I am doing around open banking in the UK, and the concept of an industry level API identity and access management layer, as well as an API management layer. All of which makes for an interesting enough mix for me to want to quantify here on the blog, load it up in my brain, and share with my readers.

From the press release, “Ping Identity, the leader in Identity Defined Security, today announced the acquisition of API cybersecurity provider Elastic Beam and the launch of PingIntelligence for APIs.” Which I think reflects some of the evolution of API security I’ve been seeing in the space, moving beyond just API management, and also being about security from the outside-in. The newly combined security solution, PingIntelligence for APIs, focuses in on automated API discovery, threat detection & blocking, API deception & honeypots, traffic visibility & reporting, and self-learning–merging the IAM, API management, and API security realms for me into a single approach to addressing security that is focused on the world of APIs.

While I find this an interesting intersection for the world of APIs in general, where I’m really intrigued by the potential is when it comes to the pioneering open banking API efforts coming out of the UK, and the role Ping Identity has played. “Ping’s IAM solution suite, the Ping Identity Platform, will provide the hub for Open Banking, where all UK banks and financial services organizations, and third-party providers (TPPs) wanting to participate in the open banking ecosystem, will need to go through an enrollment and verification process before becoming trusted identities stored in a central Ping repository.” Which provides an industry level API management blueprint I think is worth tuning into.

Back in March, I wrote about the potential of the identity, access management, API management, and directory for open banking in the UK to be a blueprint for an industry level approach to securing APIs in an observable way. Where all the actors in an API ecosystem have to be registered and accessible in a transparent way through a neutral 3rd party directory, before they can provide or access APIs. In this case it is banking APIs, but the model could apply to any regulated industry, including the world of social media, which I wrote about a couple months back as well after the Cambridge Analytica / Facebook shitshow. Bringing API management and security out into the open, making it more observable and accountable, which is the way it should be in my opinion–otherwise we are going to keep seeing the same games being played that we’ve seen with high profile breaches like Equifax, and API management lapses like we see at Facebook.

This is why I find the Ping Identity acquisition of Elastic Beam interesting and noteworthy. The acquisition reflects the evolving world of API security, but also has real world applications as part of important models for how we need to be conducting API operations at scale. Elastic Beam is a partner of mine, and I’ve been talking with them and the Ping Identity team since the acquisition. I’ll keep talking with them about their road map, and I’ll keep working to understand how they apply to the world of API management and security. I feel the acquisition reflects the movement in API security I’ve been wanting to see for a while, moving us beyond just authentication and API management, looking at API security through an external lens, exploring the potential of machine learning, but also not leaving everything we’ve learned so far behind.


People Still Think APIs Are About Giving Away Your Data For Free

After eight years of educating people about sensible API security and management, I’m always amazed at how many people I come across who still think public web APIs are about giving away access to your data, content, and algorithms for free. I regularly come across very smart people who say they’d be doing APIs, but they depend on revenue from selling their data and content, and wouldn’t benefit from just putting it online for everyone to download for free.

I wonder when we started thinking the web was about giving everything away for free? It is something I’m going to have to investigate a little more. For me, it shows how much education we still have ahead of us when it comes to informing people about what APIs are, and how to properly manage them. Which is a problem, when many of the companies I’m talking to are most likely doing APIs to drive internal systems, and public mobile applications. They are either unaware of the APIs that already exist across their organization, or think that because they don’t have a public developer portal showcasing their APIs, they are much more private and secure than if they were openly offering them to partners and the public.

Web API management has been around for over a decade now. Requiring ALL developers to authenticate when accessing any APIs, the ability to put APIs into different access tiers, and limiting the rate of consumption, while logging and billing for all API consumption, isn’t anything new. Amazon has been extremely public about their AWS efforts, and the cloud isn’t a secret. The fact that smart business leaders see all of this and do not see that APIs are driving it all represents a disconnect amongst business leadership. It is something I’m going to be testing out a little bit more to see what levels of knowledge exist across many Fortune 1000 companies, helping paint a picture of how they view the API landscape, and helping me quantify their API literacy.
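
None of these mechanics are exotic, which is what makes the disconnect so frustrating. The core loop of API management, authenticate each key, enforce the tier’s rate limit, and log every call for billing, can be sketched in a few lines. A toy illustration, with made up plan names and limits:

```python
import time
from collections import defaultdict

# Hypothetical access tiers: requests allowed per minute for each plan.
PLANS = {"free": 60, "partner": 600, "internal": 6000}

recent_calls = defaultdict(list)  # api_key -> timestamps of recent calls

def allow_request(api_key, plan):
    """Authenticate a key against its plan, rate limit it, and log the call."""
    now = time.time()
    # Keep only this consumer's calls from the last 60 seconds.
    recent_calls[api_key] = [t for t in recent_calls[api_key] if now - t < 60]
    if len(recent_calls[api_key]) >= PLANS[plan]:
        return False  # over the limit for this tier
    recent_calls[api_key].append(now)  # every call is logged and attributable
    return True

print(allow_request("example-key", "free"))
```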

Educating business leaders about APIs has been a part of my mission since I started API Evangelist in 2010. It is something that will continue to be a focus of mine. This lack of awareness is why we end up with damaging incidents like the Equifax breach, and the Cambridge Analytica / Facebook scandal. It’s how we end up with so many trolls on Twitter, and out of balance API ecosystems across federal, state, and municipal governments. It is a problem that we need to address in the industry, working to help educate business leaders around common patterns for securing and managing our API resources. I think this process always begins with education and API literacy, but the problem is a symptom of the disconnect around storytelling about public vs private APIs, when in reality there are just APIs that are secured and managed properly, or not.


Making Connections At The API Management Layer

I've been evaluating API management providers, and the important stop along the API lifecycle they serve, for eight years now. It is a space that I'm very familiar with, and I have enjoyed watching it mature, evolve, and become something that is more standardized, and lately more commoditized. I've enjoyed watching the old guard (3Scale, Apigee, and Mashery) be acquired, and API management be baked into the cloud with AWS, Azure, and Google. I've also had fun learning about Kong, Tyk, and the next generation of API management providers as they grow and evolve, as well as some of the older players like Axway as they work to retool so that they can compete and even lead the charge in the current environment.

I am renewing efforts to study what each of the API management solutions provide, pushing forward my ongoing API management research, understanding what the current capacity of the active providers is, and how they are potentially pushing forward the conversation. One of the things I'm extremely interested in learning more about is the connector, plugin, and extensibility opportunities that exist with each solution. Functionality that allows other 3rd party API service providers to inject their valuable services into the management layer of APIs, bringing other stops along the API lifecycle into the management layer, and allowing API providers to do more than just what their API management solution delivers. Turning the API management layer into much more than just authentication, service plan management, logging, analytics, and billing.

Over the last year I've been working with [API security provider ElasticBeam](https://www.elasticbeam.com/) to help make sense of what is possible at the API management layer when it comes to securing our APIs. ElasticBeam can analyze the surface area of an API, as well as the DNS, web, API management, web server, and database logs for potential threats, and apply their machine learning models in real time. Without direct access at the API management layer, ElasticBeam is still valuable, but cannot respond in real time to threats by shutting down keys, blocking requests, and countering whatever else is being leveraged against our API infrastructure. Sure, you can still respond after the fact based upon what ElasticBeam learns from scanning all of your logs, but without being able to connect directly into your API management layer, the effectiveness of their security solution is significantly diminished.

Complementing, but also contrasting with ElasticBeam, I'm also working with [Streamdata.io](http://streamdata.io) to help understand how they can be injected at the API management layer, adding an event-driven architectural layer to any existing API. The first part of this would involve turning high volume APIs into real time streams using Server-Sent Events (SSE). With future advancements focused on topical streaming, webhooks, and WebSub enhancements to transform simple request and response APIs into event-driven streams of information that only push what has changed to subscribers. Like ElasticBeam, Streamdata.io would benefit from being directly baked into the API management layer as a connector or plugin, augmenting the API management layer with a next generation event-driven layer that would complement what any API management solution brings to the table. Without an extensible connector or plugin layer at the API management layer, you can't inject additional services like security with ElasticBeam, or event-driven architecture with Streamdata.io.
I'm going to be looking for this type of extensibility as I profile the features of all of the active API management providers. I'm looking to understand the core features each API management provider brings to the table, but I'm also looking to understand how modern these API management solutions are when it comes to seamlessly working with other stops along the API lifecycle, and specifically how these other stops can be serviced by other 3rd party providers. Similar to my regular rants about API service providers always having APIs, you are going to hear me rant more about API service providers needing to have connector, plugin, and other extensibility features. API management service providers can put their APIs to work driving this connector and plugin infrastructure, but it should allow for more seamless interaction and benefits for their customers, brought to the table by their most trusted partners.
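
To make concrete the kind of extensibility I am looking for, here is a rough sketch of a pluggable management layer, where registered 3rd party plugins get to inspect every request in real time. The plugin names and logic are stand-ins of my own, not any vendor's actual interface:

```python
# A registry of 3rd party plugins, each allowed to inspect every request
# before it is proxied to the backend API.
plugins = []

class SecurityPlugin:
    """Stand-in for a security service like ElasticBeam watching traffic."""
    name = "security"

    def inspect(self, request):
        # A real plugin would apply threat models; this one just checks keys.
        return request.get("api_key") != "revoked-key"

def handle(request):
    for plugin in plugins:
        if not plugin.inspect(request):
            return {"status": 403, "body": "blocked by " + plugin.name}
    return {"status": 200, "body": "proxied to backend"}

plugins.append(SecurityPlugin())
print(handle({"api_key": "good-key", "path": "/contacts/"}))
```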


Facebook And Twitter Only Now Beginning To Police Their API Applications

I’ve been reading about all the work Facebook and Twitter have been doing over the last couple of weeks to begin asserting more control over their API applications. I’m not talking about the deprecation of APIs, that is a separate post. I’m focusing on them reviewing applications that have access to their API, and shutting off access for the ones who aren’t adding value to the platform and are violating the terms of service. Doing the hard work to maintain a level of quality on the platform, which is something they should have been doing all along.

I don’t want to diminish the importance of the work they are doing, but it really is something that should have been done along the way–not just when something goes wrong. This kind of behavior really sets the wrong tone across the API sector, and people tend to focus on the thing that went wrong, rather than the best practices of what you should be doing to maintain quality across API operations. Other API providers will hesitate to launch public APIs because they won’t want to experience the same repercussions as Facebook and Twitter have, completely overlooking the fact that you can have public APIs, and maintain control along the way. Setting the wrong precedent for API providers to emulate, and damaging the overall reputation of operating public APIs.

Facebook and Twitter have both had the tools all along to police the applications using their APIs. The problem is the incentives to do so, and to prioritize these efforts, aren’t there, due to an imbalance with their business model, and a lack of diversity in their leadership. When you have a bunch of white dudes with a libertarian ethos pushing a company towards profitability with an advertising driven business model, investing in quality control at the API management layer just isn’t a priority. You want as many applications, users, and activity as you possibly can, and when you don’t see the abuse, harassment, and other illnesses, there really is no problem from your vantage point. That is, until you get called out in the press, or are forced to testify in front of Congress. The reason us white dudes get away with this is that there are no repercussions; we just get to ignore things until they become a problem, apologize, perform a little bit to show we care, and wait until the next problem occurs.

This is the wrong API model to put out there. API providers need to see the benefits of properly reviewing applications that want access to their APIs, and the value of setting a higher bar for how applications use the API. There should be regular reviews of active applications, and audits of how they are accessing, storing, and putting resources to work. This isn’t easy work, or inexpensive to do properly. It isn’t something you can put off until you get in trouble. It is something that should be done from the beginning, and conducted regularly, as part of the operations of a well funded team. You can have public APIs for a platform, and avoid privacy, security, and other shit-shows. If you need an example of doing it well, look at Slack, who has a public API that is successful, even with a high level of bot automation, but somehow manages to stay out of the spotlight for doing dumb things. It is because their API management practices are in better alignment with their business model–the incentives are there.

For the next 3-5 years I’m going to have to hear from companies who aren’t doing public APIs because they don’t want to make the same mistakes as Facebook and Twitter. All because Facebook and Twitter were able to get away with such bad behavior for so long, avoid doing the hard work of managing their API platforms, and receive so much bad press. All in the name of growth and profits at all costs. Now, I’m going to have to write a post every six months showcasing Facebook and Twitter as examples of how NOT to run your platforms, explaining the importance of healthy API management practices, and investing in your API teams so they have the resources to do it properly. I’d rather have positive role models to showcase, rather than poorly behaved ones who force me to work overtime to change perceptions and alter API providers’ behavior. As an API community, let’s learn from what has happened, invest properly in our API management layers, properly screen and get to know who is building applications on our resources, and regularly tune into and audit their behavior. Yes, it takes more investment, time, and resources, but in the end we’ll all be better off for it.


OpenAPI Is The Contract For Your Microservice

I’ve talked about how generating an OpenAPI (fka Swagger) definition from code is still the dominant way that microservice owners are producing this artifact. This is a by-product of developers seeing it as just another JSON artifact in the pipeline, and of it being primarily used to create API documentation, often times using Swagger UI–which is also why it is still called Swagger, and not OpenAPI. I’m continuing my campaign to help the projects I’m consulting on be more successful with their overall microservices strategy by helping them better understand how they can work in concert, by focusing in on OpenAPI, and realizing that it is the central contract for their service.

Each Service Begins With An OpenAPI Contract There is no reason that microservices should start with writing code. It is expensive, rigid, and time consuming. The contract that a service provides to clients can be hammered out using OpenAPI, and made available to consumers as a machine readable artifact (JSON or YAML), as well as visualized using documentation tools like Swagger UI, Redoc, and other open source tooling. This means that teams need to put down their IDEs, and begin either handwriting their OpenAPI definitions, or begin using an open source editor like Swagger Editor, Apicurio, API GUI, or even the Postman development environment. The entire surface area of a service can be defined using OpenAPI, and then provided as a mocked version of the service, with documentation for usage by UI and other application developers. All before code has to be written, making microservices development much more agile, flexible, iterative, and cost effective.
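
To illustrate how little is needed to get started, here is a minimal sketch of a hand-authored contract, built in Python and dumped as the YAML artifact that gets shared with consumers. The service name and paths are hypothetical:

```python
import yaml  # PyYAML

# A minimal, hand-authored contract for a hypothetical service, no code yet.
contract = {
    "openapi": "3.0.0",
    "info": {"title": "Contacts Service", "version": "0.1.0"},
    "paths": {
        "/contacts": {
            "get": {
                "summary": "List contacts",
                "responses": {"200": {"description": "An array of contacts"}},
            }
        }
    },
}

# The YAML artifact is what gets shared, mocked, and iterated on with consumers.
print(yaml.safe_dump(contract, sort_keys=False))
```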

Mocking Of Each Microservice To Hammer Out Contract Each OpenAPI can be used to generate a mock representation of the service using Postman, Stoplight.io, or another OpenAPI-driven mocking solution. There are a number of services and tooling available that take an OpenAPI and generate a mock API, as well as the resulting data. Each service should have the ability to be deployed locally as a mock service by any stakeholder, published and shared with other team members as a mock service, and shared as a demonstration of what the service does, or will do. Mock representations of services will minimize builds, the writing of code, and refactoring to accommodate rapid changes during the API development process. Code shouldn’t be generated or crafted until the surface area of an API has been worked out, and reflects the contract that each service will provide.
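
As a rough illustration of what a mock gives you, here is a hand-rolled version using Flask, with canned example data standing in for what a tool like Postman or Stoplight.io would generate for you. The port and data are my own placeholders:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Canned example data, hand-written to match the contract, no backend,
# no database, just enough for consumers to start integrating against.
MOCKS = {"/contacts": [{"id": 1, "name": "Jane Example"}]}

@app.route("/contacts")
def contacts():
    return jsonify(MOCKS["/contacts"])

if __name__ == "__main__":
    app.run(port=8010)  # any stakeholder can run the mock locally
```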

OpenAPI Documentation Always Available In Repository Each microservice should be self-contained, and always documented. Swagger UI, Redoc, and other API documentation generated from OpenAPI have changed how we deliver API documentation. OpenAPI generated documentation should be available by default within each service’s repository, linked from the README, and readily available for running using static website solutions like GitHub Pages, or running locally on localhost. API documentation isn’t just for the microservice’s owner / steward to use, it is meant for other stakeholders, and potential consumers. API documentation for a service should be always on, always available, and not something that needs to be generated, built, or deployed. API documentation is a default tool that should be present for EVERY microservice, and treated as a first class citizen as part of its evolution.

Bringing An API To Life Using Its OpenAPI Contract Once an OpenAPI contract has been defined, designed, and iterated upon by the service owner / steward, as well as a handful of potential consumers and clients, it is ready for development. A finished (enough) OpenAPI can be used to generate server side code using a popular language framework, build out as part of an API gateway solution, or drive common proxy services and tooling. In some cases the resulting build will be a finished API ready for use, but most of the time it will take some further connecting, refinement, and polishing before it is a production ready API. Regardless, there is no reason for an API to be developed, generated, or built until the OpenAPI contract is ready, providing the required business value each microservice is being designed to deliver. Writing code that will have to change along with the API is an inefficient use of time in a virtualized API design lifecycle.

OpenAPI-Driven Monitoring, Testing, and Performance A ready-to-go OpenAPI contract can be used to generate API tests, monitors, and performance tests to ensure that services are meeting their business service level agreements. The details of the OpenAPI contract become the assertions of each test, which can be executed against an API on a regular basis to measure not just the overall availability of an API, but whether or not it is actually meeting the specific, granular business use cases articulated within the OpenAPI contract. Every detail of the OpenAPI becomes the contract for ensuring each microservice is doing what has been promised, and something that can be articulated and shared with humans via documentation, as well as programmatically by other systems, services, and tooling employed to monitor and test according to a wider strategy.
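
Here is a simplified sketch of what contract-driven testing can look like, turning each documented operation and response code into an assertion. The base URL and file name are assumptions for the example:

```python
import requests
import yaml

BASE_URL = "http://localhost:8010"  # assumption: where the service is running
HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

with open("openapi.yaml") as f:  # assumption: the contract lives here
    spec = yaml.safe_load(f)

# Every documented operation and response code becomes a test assertion.
for path, operations in spec["paths"].items():
    for method, operation in operations.items():
        if method not in HTTP_METHODS:
            continue  # skip path-level keys like "parameters"
        expected = [int(c) for c in operation["responses"] if str(c).isdigit()]
        response = requests.request(method, BASE_URL + path)
        assert response.status_code in expected, (
            f"{method.upper()} {path} returned {response.status_code}, "
            f"but the contract promises {expected}"
        )
```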

Empowering Security To Be Directed By The OpenAPI Contract An OpenAPI provides the entire details of the surface area of an API. In addition to being used to generate tests, monitors, and performance checks, it can be used to inform security scanning, fuzzing, and other vital security practices. There is a growing number of services and tooling emerging that allow for building models, policies, and executing security audits based upon OpenAPI contracts. Taking the paths, parameters, definitions, security, and authentication, and using them as actionable details for ensuring security across not just an individual service, but potentially hundreds or thousands of services being developed across many different teams. OpenAPI is quickly becoming not just the technical and business contract, but also the political contract for how you do business on the web in a secure way.
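
A crude example of contract-driven negative testing: anything the OpenAPI does not document should be rejected by the API, not quietly served. Again, the base URL and file name are assumptions:

```python
import requests
import yaml

BASE_URL = "http://localhost:8010"  # assumption, as in the testing sketch
VERBS = {"get", "post", "put", "patch", "delete"}

with open("openapi.yaml") as f:
    spec = yaml.safe_load(f)

# Anything the contract does NOT document should be rejected, not served.
for path, operations in spec["paths"].items():
    documented = {m for m in operations if m in VERBS}
    for verb in VERBS - documented:
        response = requests.request(verb, BASE_URL + path)
        assert response.status_code in (401, 403, 404, 405), (
            f"undocumented {verb.upper()} {path} was not rejected"
        )
```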

OpenAPI Provides API Discovery By Default An OpenAPI describes the entire surface area of the request and response of each API, providing 100% coverage of all the interfaces a service will possess. While this OpenAPI definition will be used to generate mocks, code, documentation, testing, monitoring, and security, serving other stops along the lifecycle, it also provides much needed discovery across groups, and by consumers. Anytime a new application is being developed, teams can search across the team’s GitHub, GitLab, Bitbucket, or Team Foundation Server (TFS), and see what services already exist before they begin planning any new services. Service catalogs, directories, search engines, and other discovery mechanisms can use OpenAPIs across services to index them, and make them available to other systems, applications, and most importantly to other humans who are looking for services that will help them solve problems.
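
A simple sketch of what that discovery can look like, walking a directory of cloned repositories and building a catalog from whatever contracts it finds. The directory layout and file names are assumptions:

```python
import os

import yaml

# Walk a directory of cloned service repositories and index every contract,
# giving teams a simple catalog to search before planning anything new.
catalog = []
for root, _dirs, files in os.walk("repos"):
    for name in files:
        if name in ("openapi.yaml", "swagger.yaml"):
            with open(os.path.join(root, name)) as f:
                spec = yaml.safe_load(f)
            catalog.append({
                "service": spec["info"]["title"],
                "paths": sorted(spec.get("paths", {})),
            })

for entry in catalog:
    print(entry["service"], "->", ", ".join(entry["paths"]))
```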

OpenAPI Delivers The Integration Contract For Clients OpenAPI definitions can be imported into Postman, Stoplight, and other API design, development, and client tooling, allowing for quick setup of environments, and collaboration on integration across teams. OpenAPIs are also used to generate SDKs, and deploy them using existing continuous integration (CI) pipelines, by companies like APIMATIC. OpenAPIs deliver the client contract we need to learn about an API, get to work developing a new web or mobile application, or manage updates and version changes as part of our existing CI pipelines. OpenAPIs deliver the integration contract needed for all levels of clients, helping teams go from discovery to integration with as little friction as possible. Without this contract in place, on-boarding with one service is time consuming, and doing it across tens or hundreds of services becomes impossible.

OpenAPI Delivers Governance At Scale Across Teams Delivering consistent APIs within a single team takes discipline. Delivering consistent APIs across many teams takes governance. OpenAPI provides the building blocks to ensure APIs are defined, designed, mocked, deployed, documented, tested, monitored, performant, secured, discovered, and integrated with consistently. The OpenAPI contract is an artifact that governs every stop along the lifecycle, and at scale becomes the measure for how well each service is delivering across not just tens, but hundreds or thousands of services, spread across many groups. Without the OpenAPI contract, API governance is non-existent, or at best extremely cumbersome. The OpenAPI contract is not just top down governance telling teams what they should be doing, it is the bottom up contract through which the service owners / stewards who are delivering quality services on the ground inform governance, and lead efforts across many teams.
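
Governance rules become executable once the contract is the artifact being measured. A couple of illustrative checks, of my own invention rather than any formal style guide, sketched in Python:

```python
import yaml

def lint(spec_file):
    """Apply a couple of illustrative governance rules to one contract."""
    with open(spec_file) as f:
        spec = yaml.safe_load(f)
    problems = []
    if "description" not in spec.get("info", {}):
        problems.append("info.description is required")
    for path in spec.get("paths", {}):
        if any(ch.isupper() for ch in path):
            problems.append(f"{path}: paths must be lowercase")
    return problems

# At scale, the same checks run against every repository in the organization.
print(lint("openapi.yaml"))
```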

I cannot overstate the importance of the OpenAPI contract to each microservice, as well as to the overall organizational and project microservice strategy. I know that many folks will dismiss the role that OpenAPI plays, but look at the list of members who govern the specification. Consider that Amazon, Google, and Azure ALL have baked OpenAPI into their microservice delivery services and tooling. OpenAPI isn’t a WSDL. An OpenAPI contract is how you will articulate what your microservice will do from inception to deprecation. Make it a priority, don’t treat it as just an output from your legacy way of producing code. Roll up your sleeves, and spend time editing it by hand, and loading it into 3rd party services to see the contract for your microservice in different ways, through different lenses. Eventually you will begin to see it is much more than just another JSON artifact laying around in your repository.


Investment In API Security Will Continue To Fall Short While There Is No Breach Accountability

I have a lot of conversations with folks down in the trenches about API security, and what they are doing to be proactive when it comes to keeping their API infrastructure secure. The will and the desire amongst folks I talk to regarding API security is present. They want to do what it takes to truly understand what is needed to keep their APIs secure, but many have their hands tied because of a lack of resources to actually do what is needed. Every API team I know is short-handed, and doing the best they can with what they have available to them. A lack of investment in API security isn’t always intentional; it ends up just being a reality of the priorities on the ground within the organizations they work in.

While I’m sure leaders within these companies are concerned about breaches within their API infrastructure, the urgency to invest in this area isn’t always a priority. Despite an increase in high profile, often API-induced breaches, IT and API groups are still not given the amount of resources they need to do something about potential security incidents. Other than the stress and bad press of a breach, there really are no consequences in the United States. We have seen this play out over and over, and when high profile breaches like Equifax go unpunished, other corporate leaders fully understand that there will be no consequences, so why invest in preventative measures–we will just respond to it “if” it happens.

This is why GDPR, and other similar legislation will become important to the API security industry. Without real civil or criminal penalties involved with breaches, and even heavier penalties for poorly handled breaches, companies just aren’t going to care. Data is just a replaceable commodity, and a company can recover from the hit to their brand when a breach does occur. Making the investment in proactive API security training, staffing, services, and processes an unnecessary thing. Reflecting how health care is handled in this country, with 95% of the investment in treating things after they happen, and about 5% investment in preventative care. Hoping all along you don’t get sick, or have a breach.

I can talk until I am blue in the face to business leaders about API security, and make them aware of healthy practices, but if there isn’t an incentive to invest in API security, it will never happen. At this point I feel that API security is more a reflection of a wider systemic illness around how we view data, and that country and industry level policy is where change needs to occur. I will keep showcasing specific building blocks of an API security strategy, as well as showcase services and tools to help you implement your strategy, but I feel like the most meaningful change will have to occur at the policy level. Otherwise business leaders will never prioritize API security, leaving all of OUR data vulnerable to exploitation–it is just a cost of doing business at this point.


General Data Protection Regulation (GDPR) Forcing Us To Ask Questions About Our Data

I’ve been learning more about the EU General Data Protection Regulation (GDPR) recently, and have been having conversations about compliance with companies in the EU, as well as the US. In short, GDPR requires anyone working with personal data to be up front about the data they collect, make sure what they do with that data is observable to end-users, and take a privacy and security by design approach when it comes to working with all personal data. While the regulation seems heavy handed and unrealistic to many, it really reflects a healthy view of what personal data is, and what a sustainable digital future will look like.

The biggest challenge with becoming GDPR compliant is the data mess most companies operate in. Most companies collect huge amounts of data, believing it is essential to the value they bring to the table, with no real understanding of everything that is being collected, or any logical reasons behind why it is gathered, stored, and kept around. A “gather it all”, big data mentality has dominated the last decade of doing business online. Database groups within organizations hold a lot of power and control because of the data they possess. There is a lot of money to be made when it comes to data access, aggregation, and brokering. It won’t be easy to unwind and change the data-driven culture that has emerged and flourished in the Internet age.

I regularly work with companies who do not have coherent maps of all the data they possess. If you asked them for details on what they track about any given customer, very few would be able to give you a consistent answer. Doing web APIs has forced many organizations to think more deeply about what data they possess, and how they can make it more discoverable, accessible, and usable across systems, web, mobile, and device applications. Even with this opportunity, most large organizations are still struggling with what data they have, where it is stored, and how to access it in a consistent, and meaningful way. Database culture within most organizations is just a mess, which contributes to why so many are freaking out about GDPR.

I’m guessing many companies are worried about complying with GDPR, as well as being able to even respond to any sort of regulatory policing event that may occur. This fear is going to force data stewards to begin thinking about the data they have on hand. I’ve already had conversations with some banks who are working on PSD2 compliant APIs, and who are working in tandem on GDPR compliance efforts. Both are making them think deeply about what data they collect, where it is stored, and whether or not it has any value. Something I’m hoping will force some companies to stop collecting some of the data altogether, because it just won’t be worth justifying its existence in the current cyber(in)secure, and increasingly accountable regulatory environment.

Doing APIs and becoming GDPR compliant go hand in hand. To do APIs you need to map out the data landscape across your organization, something that will contribute to GDPR compliance. To respond to GDPR events, you will need APIs that provide access to end-users’ data, and that leverage API authentication protocols like OAuth to ensure partnerships and 3rd party access to end-users’ data are accountable. I’m optimistic that GDPR will continue to push forward healthy, transparent, and observable conversations around our personal data. Ones that focus on, and include, the end-users whose data we are collecting, storing, and often times selling. I’m hopeful that the stakes become higher regarding the penalties for breaches, and the shady brokering of personal data, and that GDPR becomes the normal mode of doing business online in the EU, and beyond.


An OpenAPI Vendor Extension For Defining Your API Audience

The fashion e-commerce platform Zalando has an interesting approach to classifying their APIs based upon who is consuming them. It isn’t just about APIs being published publicly or privately; they have actually standardized their definition, and have established an OpenAPI vendor extension, so that the classification is machine readable and available via each OpenAPI.

According to the Zalando API design guide, “each API must be classified with respect to the intended target audience supposed to consume the API, to facilitate differentiated standards on APIs for discoverability, changeability, quality of design and documentation, as well as permission granting. We differentiate the following API audience groups with clear organisational and legal boundaries.”

  • component-internal - The API consumers with this audience are restricted to applications of the same functional component. All services of a functional component are owned by a specific dedicated owner and engineering team. Typical examples are APIs being used by internal helper and worker services or that support service operation.
  • business-unit-internal - The API consumers with this audience are restricted to applications of a specific product portfolio owned by the same business unit.
  • company-internal - The API consumers with this audience are restricted to applications owned by the business units of the same company (e.g. Zalando company with Zalando SE, Zalando Payments SE & Co. KG. etc.)
  • external-partner - The API consumers with this audience are restricted to applications of business partners of the company owning the API and the company itself.
  • external-public - APIs with this audience can be accessed by anyone with Internet access.

Note: a smaller audience group is intentionally included in the wider group and thus does not need to be declared additionally. The API audience is provided as API meta information in the info-block of the Open API specification and must conform to the following specification:

```yaml
#/info/x-audience:
  type: string
  x-extensible-enum:
    - component-internal
    - business-unit-internal
    - company-internal
    - external-partner
    - external-public
  description: |
    Intended target audience of the API. Relevant for standards around
    quality of design and documentation, reviews, discoverability,
    changeability, and permission granting.
```

Note: Exactly one audience per API specification is allowed. For this reason a smaller audience group is intentionally included in the wider group and thus does not need to be declared additionally. If parts of your API have a different target audience, we recommend to split API specifications along the target audience — even if this creates redundancies.

Here is an example of the OpenAPI vendor extension in action, as part of the info block:

```yaml
swagger: '2.0'
info:
  x-audience: company-internal
  title: Parcel Helper Service API
  description: API for <…>
  version: 1.2.4
```

Providing a pretty interesting way of establishing the scope and reach of each API, in a way that makes each API owner think deeply about who they are / should be targeting with the service. Done in a way that makes the audience focus machine readable, and available as part of its OpenAPI definition, which can then be used across discovery, documentation, and through API governance and security.
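
As a quick illustration of that governance angle, here is how a build pipeline might enforce the extension against the enum above. The sketch, and the file name, are mine, not Zalando's tooling:

```python
import yaml

ALLOWED_AUDIENCES = {
    "component-internal", "business-unit-internal", "company-internal",
    "external-partner", "external-public",
}

def check_audience(spec_file):
    """Fail any contract that does not declare a valid x-audience."""
    with open(spec_file) as f:
        spec = yaml.safe_load(f)
    audience = spec.get("info", {}).get("x-audience")
    assert audience in ALLOWED_AUDIENCES, (
        f"{spec_file}: missing or invalid x-audience ({audience!r})"
    )

check_audience("parcel-helper-service.yaml")  # hypothetical file name
```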

I like the multiple views of who the audience could be, going beyond just public and private APIs. I like that it is an OpenAPI vendor extension. I like that they even have a schema crafted for the vendor extension–another interesting concept I’d like to see more of. Overall, making for a pretty compelling approach to defining the reach of our APIs, and quantifying the audience we are looking to reach with each API we publish.


Explaining API Security To Organizational Leadership

I’ve been tasked with helping explain API security to senior leadership, and wanted to work through my ideas here on the blog. For this audience, I’m not going to get down into the weeds regarding the technical specifications behind OAuth and other approaches; I’ll try to keep things high level, introducing folks to the art that is API security. The phrase API security represents a balance of concepts, because APIs are by nature about providing access, while security is about controlling, and sometimes limiting, access, resulting in a new way of getting business done on the open web.

First, What Are APIs? APIs are not the latest trend, or vendor solution; they are the next evolution of the web. Web sites and applications return HTML via a URL, and are meant to display information to humans in a browser, while APIs return JSON or XML of the same information, meant to be used in other applications and systems. API security is designed to allow access to our digital resources using the web, while also securing them in a way that ensures only the intended audience is able to obtain access. APIs are designed to securely provide access to data, content, media, and algorithms using the same web that us humans use to access information online via our browsers.

Access Using Secure URLs APIs use web URLs to read and write data, content, and media, and to allow engagement with algorithms. If you want a list of press releases, you visit https://api.example.com/press/. If you want a list of contacts from the CRM, you visit https://api.example.com/contacts/. The URL for all API resources should be encrypted by default, protecting all requests and responses in transit. Providing the first layer of security for APIs, ensuring only approved consumers can view the data, content, media, and valuable algorithms being transmitted online.

Registration Always Required For APIs A common misconception about web APIs is that they are all like Twitter, and are publicly available on the web. However, almost all web APIs actually require that you register before you get access to any resources. This process is the beginning of what is called API management, where developers have to sign up for an account, and in some cases be approved before they get access to APIs. Most of the time there is self-service, automatic registration, but developers only get limited access; once they have been approved, proven their identity, or put in a credit card, they will be able to obtain higher levels of access to resources.

Application Keys For Each API Call Once developers have been approved for access, they can begin making API calls. However, each API will require that API keys and any other required credentials are present with each call. Providing identification of every API consumer, and exactly what they are consuming. API keys are often seen as all that is needed to properly secure APIs, but in reality they are much more about identifying and tracking what API consumers are doing. Going beyond just focusing on securing the digital resources, and really developing an awareness of who is accessing what, which will prove to be more valuable than just requiring registration to access APIs alone.

Suite Of Authentication Options There are a handful of approaches in use when it comes to requiring developers to authenticate and pass along their API keys with each request. Each approach has pros and cons, but the industry has widely settled in on four main ways to require API developers to authenticate themselves when using APIs:

  • Basic Auth - Usage of the basic authentication format that is part of the standard HTTP operations, employing a username and password as credentials for accessing API resources.
  • Key Access - Providing simple tokens, often called API keys, as a common way to access APIs, issuing one to each developer and per application they register.
  • JSON Web Token - JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object.
  • OAuth - Providing an OAuth layer to API operations, securing high value APIs, while also opening up a conversation between an API platform, developers, and end-users regarding access to their content and data.

Depending on the security requirements of the resource, and whether or not user-generated data and content is involved, you may select a different path for how you require API developers to authenticate. API keys via the URL or headers, as well as Basic Auth, are the most common, with JSON Web Tokens and OAuth being put to work in higher security environments, and where user data and permissions might be required.
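
For the more technical folks in the room, the four approaches look nearly identical on the wire. Here is a rough sketch using Python's requests library, with placeholder credentials, header names, and URLs:

```python
import requests

url = "https://api.example.com/contacts/"  # example resource from above

# Basic Auth: a username and password accompany each request.
requests.get(url, auth=("username", "password"))

# Key access: a simple token, commonly sent as a header or query parameter.
requests.get(url, headers={"X-API-Key": "my-api-key"})

# JWT or OAuth 2.0: both typically ride in the same Authorization header
# once a token has been issued to the developer or end-user.
requests.get(url, headers={"Authorization": "Bearer <access-token>"})
```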

Logging Of All API Activity An essential aspect of securing APIs involves the logging of ALL API calls, no matter who the consumer is, what application it is serving, and whether it is for internal or external consumption. All API calls get logged equally, and when all API developers are required to authenticate and pass their keys with each API call, all the evidence needed to understand API activity and consumption is present. When you combine the API level logging with backend database logs, and front-end DNS logs, you can define a set of perimeters that will help ensure the security of your API resources.

Modern API Management Solutions API management combines the authentication and logging described above with API plans, rate limiting, and analytics, to achieve a heightened awareness regarding who is accessing what API resources, and how they are putting them to use. Modern API management allows for the monitoring of API registrations, authentication, and consumption in real time, with controls for limiting or shutting off access whenever terms of service and security violations are identified. Providing API providers with the tools they need to monitor and respond to any security concerns, while staying in tune with exactly how resources are being accessed via APIs.

Defining Access To APIs Using Service Composition API management also allows API providers to develop different levels of API access plans, which govern the APIs that developers will have access to, and how much they are entitled to consume. API service composition is all about organizing different APIs into different plans, and setting rate limits that govern how much a user can consume per second, minute, day, or month. New users are often placed into plans with stricter limitations, while more trusted, partner, and internal consumers enjoy higher rate limits, and less restrictive plans. Service composition helps minimize the damage that occurs whenever there are bad actors present, or security incidents occur, keeping breaches limited to a small subset of low value APIs, and a limited amount of resources accessed.

Monitoring All API Availability Beyond API management, and the analysis of API consumption, it is common for API providers to set up external monitors that keep an eye on whether APIs are up or down, and what their overall availability is. Providing a status dashboard showing whether APIs are available, which also often shows historical availability over time. The health and availability of an API is usually a barometer of the security of an API. Insecure and compromised APIs often have an unreliable availability track record. Making monitoring critical to the overall security of API operations.

Granular Testing of All APIs Augmenting API management and monitoring, the most mature and secure API providers out there also run recurring tests on APIs, going beyond just seeing if they are up and available, and actually making sure they respond, and deliver the data and content that is expected. These tests will sometimes go further and test the ability to publish bad data to APIs, input incorrect or additional information, and push the boundaries of what an API will respond to. Mimicking some of the behaviors that malicious users and applications will perform, and testing the quality of the surface area of an API.

Security Scans For All APIs Beyond granular tests for all APIs, more general security scans are regularly performed, looking beyond the potentially known security problems, and finding the more unknown issues. Scanning additional URLs, parameters, and headers, and checking for holes, gaps, and other areas of the surface area of APIs which may have been overlooked or forgotten. Security scanning reflects the scanning that already occurs on most web and mobile applications, but will also consider many of the API specific vulnerabilities that we’ve seen behind breaches of the past, and those that are unique to API integration scenarios.

API Security Is More About Building An Awareness While API security centers around establishing a defensive perimeter around API resources, policing and enforcing rules along this perimeter, and encrypting all traffic, most of API security is realized through an awareness around how APIs are being accessed. API management, logging, monitoring, testing, and analytics provide an approach to understanding how data, content, media, and algorithms are being used, or not being used. Providing an evolved level of awareness that goes well beyond legacy web services and database connectivity.

API security should center around encryption, and common approaches to authenticating APIs. However, if APIs are not being properly monitored, tested, analyzed, and audited, a strong security perimeter, encryption, and strong authentication will not mean much. It is important for all key stakeholders involved in API operations to understand that API security is a balance between allowing for access and consumption, while also locking down, encrypting, and defending the perimeter. The API providers who find the most success with their API operations tend to strike a balance between these two opposing personalities of API security, making sure everything is secure, while also making sure providers, developers, and end-users feel secure, and are able to get access to the resources they need to do their jobs.


VersionEye SDK Security Notifications

I’ve written about VersionEye a couple of times. They help you monitor the 3rd party code you use, keeping an eye on dependencies, license violations, and security issues. I’ve written about the license portion of this equation, but they came up again while doing my API security research, and I wanted to make sure I revisited what they were up to in this aspect of the API lifecycle, floating them up on my radar.

VersionEye is keeping an eye on multiple security databases and helps you monitor the SDKs you are using in your applications. Inversely, if you are an API provider generating SDKs for your API consumers to put to use, it seems like you should be proactively leveraging VersionEye to help you keep an eye on the security aspects of your SDK management. They even help developers within their existing CI/CD workflows, which is something that you should be considering as you plan, craft, and support your APIs. Making it as easy as possible for you to leverage your API’s SDKs in your own workflow, and doing the same for your consumers, while also paying attention to security at each step, breaking your CI/CD process when security is breached.
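
As a sketch of what breaking a CI/CD process on a security notification could look like–the endpoint, parameters, and response fields here are assumptions for illustration only, so check the VersionEye API documentation for the real routes:

    import sys
    import requests

    # Hypothetical project endpoint and response shape, for illustration only.
    resp = requests.get(
        "https://www.versioneye.com/api/v2/projects/my-project",
        params={"api_key": "YOUR_API_KEY"},
        timeout=10,
    )
    resp.raise_for_status()
    vulnerabilities = resp.json().get("sv_count", 0)  # security vulnerability count

    if vulnerabilities:
        print(f"{vulnerabilities} known vulnerabilities in dependencies -- failing build")
        sys.exit(1)  # a non-zero exit is what actually breaks the pipeline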

I also wrote about how VersionEye open sourced their APIs a while back, highlighting how you can deploy them into any environment you desire. I’m fascinated by the model VersionEye provides for the API space. They are offering valuable services that help us manage our crazy worlds, with a viable commercial and open source offering that integrates with your existing CI/CD workflow. Next, I’m going to study the dependency portion of what VersionEye offers, then take some time to better understand their business model and pricing. VersionEye is pretty close to what I like to see in a service provider. They don’t have all the shine of a brand new startup, but they have all the important elements that really matter.


Admit It You Do Not Respect Your API Consumers And End Users

Just admit it, you couldn’t care less about your API consumers. You are just playing this whole API game because you read somewhere that this is what everyone should be doing now. You figured you could get some good press out of doing an API, get some free work from developers, and look like you are one of the cool kids for a while. You do the song and dance well, and you have developed and deployed an API. It will look like the other APIs out there, but when it comes to supporting developers, or actually investing in the community, you really aren’t that interested in rolling up your sleeves and making a difference. You just don’t really care that much, as long as it looks like you are playing the API game.

Honestly, you’d do any trend that comes along, but this one has so many perks you couldn’t ignore it. Not only do you get to be API cool, you did all the right things, launched on Product Hunt, and you have a presence at all the right tech events. Developers are lining up to build applications, and are willing to work for free. Most of the apps that get built are worthless, but the SDKs you provide act as a vacuum for data. You’ve managed to double your budget by selling the data you acquire to your partners, and other data brokers. You could give away your API for free, and still make a killing, but hell, you have to keep charging just so you look legit, and don’t raise any alarm bells.

It is hard to respect developers who line up and work for free like this. And the users, they are so damn clueless regarding what is going on, they’ll hand over their address book and location in real-time without ever thinking twice. This is just too easy. APIs are such a great racket. You really don’t have to do anything but blog every once in a while, show up at events and drink beer, and make sure the API doesn’t break. What a sweet gig, huh? No, not really, you are just a pretty sad excuse of a person, and it will catch up with you somewhere. You really represent everything wrong with technology right now, and are contributing to the world being a worse place than it already is–nice job!

Note: If my writing is a little dark this week, here is a little explainer–don’t worry, things will be back to normal at API Evangelist soon.


The Reason For Your API Security Breach: You Did Nothing

You just got three separate calls, and countless emails, alerting you to the fact that you just had a major security breach. You don’t know the extent of the damage yet, but it looks like they got into your primary customer database via the APIs you depend on for all of your mobile applications. You are sitting in your office chair, sweating, trying to figure out how this happened. I will tell you: it is because you have done nothing. You have de-prioritized security at every turn, resulting in an open door for any hacker to walk through.

Not only have you done nothing, you actually worked against anyone who brought up the topic of API security. You would respond: we don’t have the time. We don’t have the budget. We don’t have the skills. You never listened to any of your staff, even that security lady (what was her name?) you hired last year, who then resigned with a letter containing over 25 security holes she had been trying to take care of, but because of the toxic environment you’ve created, she was unable to do anything and moved on. You have created an environment where anyone who brings up security concerns feels persecuted, and even that their job is in jeopardy, making “doing nothing” the standard mode across all operations.

You have eight separate mobile applications which all use APIs, and all of them use the customer database in question, which also stores credit cards, in violation of your PCI compliance–you know, those forms you sign off on each year? You felt these mobile APIs were secure because they were hidden behind your mobile applications, and your developers had given you an application security scan report last year. In this situation you would love to blame these developers, but all roads lead to you when it comes to responsibility for this situation. You begin to feel sick to your stomach thinking about the 345,633 credit cards and other PII that was leaked. You know the numbers, because you have real time reports on how many customers you have. You just don’t have any real time reports for anything to do with security.

API security was everyone’s first concern when you first pitched these projects back in 2010, and you managed to run for seven years without any major incidents. Each year you have just become more emboldened in your do nothing strategy, but everything has caught up with you now. What do you do? You don’t have a breach action plan. You don’t have any sort of protocol for this type of situation, despite saying that you did several times in meetings. You better get to work dealing with the technical fallout from all of this, because it will last weeks, if not months. Then you get to start dealing with the business, legal, and political fallout from this breach. Hey, there is a bright spot. The chances are pretty high that you might not even have a job after all of this. Enjoy!

Note: If my writing is a little dark this week, here is a little explainer–don’t worry, things will be back to normal at API Evangelist soon.


Considering How Machine Learning APIs Might Violate Privacy and Security

I was reading about how Carbon Black, an endpoint detection and response (EDR) service, was exposing customer data via a 3rd party API service they were using. The endpoint detection and response provider allows customers to optionally scan system and program files using the VirusTotal service. Carbon Black did not realize that premium subscribers of the VirusTotal service get access to the submitted files, allowing any company or government agency with premium access to VirusTotal’s application programming interface (API) to mine those files for sensitive data.

It provides a pretty scary glimpse at the future of privacy and security in a world of 3rd party APIs if we don’t think deeply about the solutions we bake into our applications and services. Each API we bake into our applications should always be scrutinized for privacy and security concerns, making sure end-users aren’t being subjected to unnecessary exposure. This situation sounds like both the API provider and the consumer contributed to the privacy violation, and adjusting platform access levels, and communicating with API consumers, would be the best path forward.

Beyond just this situation, I wanted to write about this topic as a cautionary tale for the unfolding machine learning API landscape. Make sure we are thinking deeply about what data and content we are making available to platforms via artificial intelligence and machine learning APIs. Make sure we are asking the hard questions about the security and privacy of data and content we are running through machine learning APIs. Make sure we are thinking deeply about what data and content sets we are running through the machine learning APIs, and reducing any unnecessary exposure of personal data, content, and media.

It is easy to be captivated by the magic of artificial intelligence and machine learning APIs. It is easy to view APIs as something external, and not much of a privacy or security threat. However, with each API call we are inviting a 3rd party API into our databases, files, and other private systems. Let’s make sure we have an honest conversation with our API providers about how data and content is accessed, stored, cached, and used as part of any AI or ML process. Let’s make sure we get clarification on which partners, or other 3rd party providers are getting access to data and content that is indexed and executed as part of AI and ML API requests and responses. How long are videos or images stored? How long is data stored?

I’m seeing more discussion around dependencies going on in the API space. Which software libraries and APIs are we depending on for our applications and services? I’m feeling like this conversation is going to continue expanding, and security, privacy, and observability are going to become a more significant part of these dependency discussions. It will be a conversation that continues to push API deployment on-premise, where it is possible to observe how ML and AI API operations are being logged, stored, and tracked. I’m going to keep watching how APIs are intentionally or unintentionally violating security and privacy like this, and keep an eye on the API dependency conversation to see how it evolves as part of this security and privacy discussion.


The ElasticSearch Security APIs

I was looking at the set of security APIs over at Elasticsearch as I was diving into my API security research recently. I thought the areas where they provide security APIs for the search platform were worth noting and including in not just my API security research, but also my search, deployment, and probably authentication research, where there is some overlap. A quick sketch of calling a couple of these APIs follows the list below.

  • Authenticate API - The Authenticate API enables you to submit a request with a basic auth header to authenticate a user and retrieve information about the authenticated user.
  • Clear Cache API - The Clear Cache API evicts users from the user cache. You can completely clear the cache or evict specific users.
  • User Management APIs - The user API enables you to create, read, update, and delete users from the native realm. These users are commonly referred to as native users.
  • Role Management APIs - The Roles API enables you to add, remove, and retrieve roles in the native realm. To use this API, you must have at least the manage_security cluster privilege.
  • Role Mapping APIs - The Role Mapping API enables you to add, remove, and retrieve role-mappings. To use this API, you must have at least the manage_security cluster privilege.
  • Privilege APIs - The has_privileges API allows you to determine whether the logged in user has a specified list of privileges.
  • Token Management APIs - The token API enables you to create and invalidate bearer tokens for access without requiring basic authentication. The get token API takes the same parameters as a typical OAuth 2.0 token API except for the use of a JSON request body.
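
To ground a couple of these, here is a minimal sketch of calling the Authenticate and Token APIs–the credentials and cluster address are placeholders, and note that the URL prefix varies by release (older X-Pack versions used /_xpack/security/ where newer ones use /_security/):

    import requests

    ES = "http://localhost:9200"    # placeholder cluster address
    AUTH = ("elastic", "changeme")  # placeholder credentials

    # Authenticate API: retrieve information about the authenticated user.
    me = requests.get(f"{ES}/_security/_authenticate", auth=AUTH).json()
    print(me["username"], me["roles"])

    # Token API: trade basic auth for a bearer token, OAuth 2.0 style.
    token = requests.post(
        f"{ES}/_security/oauth2/token",
        json={"grant_type": "password", "username": AUTH[0], "password": AUTH[1]},
        auth=AUTH,
    ).json()["access_token"]

    # Subsequent requests can now skip basic authentication entirely.
    headers = {"Authorization": f"Bearer {token}"}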

Come to think of it, I’ll add this to my API management research as well. Much of this overlaps with what should be a common set of API management services. Like much of my research, there are many different dimensions to my API security research. I’m looking to see how API providers are securing their APIs, as well as how service providers are selling security services to API providers. I’m also keen on aggregating common API design patterns for security APIs, and quantifying how they overlap with other stops along the API lifecycle.

While the cache API is pretty closely aligned with delivering a search API, I think all of these APIs provide potential building blocks to think about when you are deploying any API, and represent the Venn diagram that is API authentication, management, and security. I’m going through the rest of the Elasticsearch platform looking for interesting approaches to ensuring their search solutions are secure. I don’t feel like there are any search specific characteristics of API security that I will need to include in my final API security industry guide, but Elasticsearch’s approach has reinforced some of the existing security building blocks I already had on my list.


Patent Number 9325732: Computer Security Threat Sharing

The main reason that I tend to rail against API specific patents is that much of what I see being locked up reflects the parts and pieces that make the web work. I see things like hypermedia, and other concepts that are inherently about sharing, collaboration, and reuse–something that should never be patented. This concept applies to other patents I’m seeing, but rather than being about the web, this one is about trust, and the sharing of information. Things that shouldn’t be locked up, and that exist within realms where the concept of patents actually hurts the web and APIs.

Today’s patent is out of Amazon, who are prolific patenters of web and API concepts. This one though is about the sharing of security threat information. Outlining something that should be commonplace on the web.

Title - Computer security threat sharing
Number - 09325732
Owner - Amazon Technologies, Inc.
Abstract - A computer security threat sharing technology is described. A computer security threat is recognized at an organization. A partner network graph is queried for security nodes connected to a first security node representing the organization. The first security node is connected to at least a second security node representing a trusted security partner of the organization. The second security node is associated with identification information. The computer security threat recognized by the organization is communicated to the trusted security partner using the identification information associated with the second security node.

I’m sorry. I just do not see this as unique, original, or remotely a concept that should be patentable. Similar to a previous patent on trust, I just don’t think that the sharing of security information needs to be locked up. The USPTO should recognize this. I feel like this type of patent shows how broken the patent process is, and how distorted companies’ views are of what constitutes a patentable idea. Honestly, these types of patents feel lazy to me, and lack any creativity, skill, or sensible view of how the web works.

I feel like I should start rating these patents with some sort of Rotten Tomatoes score, and start giving companies some sort of patent ranking for their portfolios. Something that encompasses the scope, lack of creativity and originality, and damaging effects of each patent. This reminds me that I need to finish my work pulling court cases from the Court Listener API, and indexing them for companies and their patent portfolios. Ultimately this is where the real damage to APIs and the web will play out, similar to the Oracle vs. Google API copyright affair, but I will keep sharing stories about these ridiculous patents, and maybe even start ranking them all by how much they stink.


An Open Source API Security Intelligence Gathering, Processing, And Distribution Framework

I was reading about GOSINT, the open source intelligence gathering and processing framework over at Cisco. “GOSINT allows a security analyst to collect and standardize structured and unstructured threat intelligence. Applying threat intelligence to security operations enriches alert data with additional confidence, context, and co-occurrence. This means that you are applying research from third parties to your event data to identify similar, or identical, indicators of malicious behavior.” The framework is written in Go, with a JavaScript front-end, and uses APIs as threat intelligence sources.

When you look at the configuration section of the README for GOSINT, you’ll see information for setting up threat intelligence feeds, including the Twitter API, the Alien Vault Open Threat Community API, the VirusTotal API, and the Collaborative Research Into Threats (CRITS) platform. GOSINT acts as an API aggregator for a variety of threat information, which then allows you to scour the information for threat indicators, which you can evolve over time, providing a pretty interesting model for not just threat information sharing, but also API driven aggregation, curation, and sharing.

GOSINT also has the notion of behaving as a “transfer station”, where you can export refined data in CSV or CRITS format. Right here seems like an opportunity for some Github integration, adding continuous integration and deployment to open source intelligence and processing workflows. Making sure refined, relevant threat information is available where it is needed, via existing API deployment and integration workflows. It wouldn’t take much to publish CSV, YAML, and JSON files to Github, which could then be used to drive distributed dashboards, visualizations, and other awareness building tools. Plus, the refined threat information is now published as CSV/JSON/YAML on Github, where it can be ingested by any system or application with access to the Github repository.
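
For example, pushing a refined set of indicators to Github takes a single call to the Github contents API–the repository, path, and token below are hypothetical, but the API call itself is standard:

    import base64
    import json
    import requests

    # Hypothetical repository and token; the Github contents API itself is real.
    URL = "https://api.github.com/repos/my-org/threat-feed/contents/indicators.json"
    HEADERS = {"Authorization": "token YOUR_GITHUB_TOKEN"}

    indicators = [{"type": "ipv4", "value": "203.0.113.10", "source": "gosint"}]
    payload = {
        "message": "Publish refined threat indicators",
        "content": base64.b64encode(json.dumps(indicators).encode()).decode(),
    }

    # Updating an existing file requires its current blob SHA.
    existing = requests.get(URL, headers=HEADERS)
    if existing.status_code == 200:
        payload["sha"] = existing.json()["sha"]

    requests.put(URL, headers=HEADERS, json=payload).raise_for_status()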

GOSINT is just one of the interesting tools I’m coming across as I turn up the volume on my API security research, thanks to the investment of ElasticBeam, my API security partner. They’ve invested in an API security guide, as well as a white paper, which is something that will generate a wealth of stories like this along the way, as I find interesting API security artifacts. I’m looking to map out the API security landscape, but I’m also interested in understanding open source API aggregation, analysis, and syndication platforms that integrate with existing CI/CD workflows, to help feed my existing human services API work, and other city, state, and federal government API projects I’m working on.


Open Sourcing Your API Like VersionEye

I’m always on the hunt for healthy patterns that I would like to see API providers, and API service providers consider when crafting their own strategies. It’s what I do as the API Evangelist. Find common patterns. Understand the good ones, and the bad ones. Tell stories about both, helping folks understand the possibilities, and what they should be thinking about as they plan their operations.

VersionEye, one very useful service that notifies you about security vulnerabilities, license violations, and out-dated dependencies in your Git repositories, has a nice approach to delivering their API, as well as the other components of their stack. You can either use VersionEye in the cloud, or you can deploy it on-premise.

VersionEye also has their entire stack available as Docker images, ready for deployment anywhere you need them. I wanted to have a single post that I can reference when talking about possible open source, on-premise, continuous integration approaches to delivering API solutions that actually have a sensible business model. VersionEye spans the areas that I think API providers should consider investing in, delivering SaaS or on-premise, while also delivering open source solutions, and generating sensible amounts of revenue.

Many APIs I come across do not have an open source version of their API. They may have open source SDKs, and other tooling on Github, but rarely does an API provider offer up an open source copy of their API, as well as Docker images. VersionEye’s approach to operating in the cloud, and on-premise, while leveraging open source and APIs, as well as dovetailing with existing continuous integration flows, is worth bookmarking. I am feeling like this is the future of API deployment and consumption, but don’t get nervous, there is still plenty of money to be made via cloud services.


Randomize IoT Device Username And Password By Default

I am totally hooked on POLITICO’s Morning Cybersecurity email. I’m not an email newsletter guy, but this is government cybersecurity wonky enough to keep me engaged each day. One of the bits that recently grabbed my attention was regarding what should be considered Internet of Things common sense.

New America’s Open Technology Institute argued that IoT device makers should start equipping their products with basic security from the start - including by randomizing each device’s default username and password, making it much harder for hackers to locate and take over poorly configured devices. “The ability to modify login credentials should not be taken as a replacement for the implementation, where possible, of unique passwords for every device sold,” OTI wrote. Also on the common-sense front, OTI said that IoT devices “must be designed in such a way that they can be patched or updated.”

I wish this was the default for ANYTHING we connect to the Internet. I wish that IoT manufacturers would make this the default without the government stepping in. I’m guessing there is more money in selling insecure devices, and defending against them, than in actually securing Internet connected devices in the first place. From the number of breaches I’m tracking each week, I’m guessing business will be good for a small handful of Internet of Things manufacturers in this climate.
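
Randomizing credentials at provisioning time is not heavy lifting either–here is a minimal sketch, with a hypothetical provisioning function, of generating unique defaults per device:

    import secrets

    def provision_device(serial: str) -> dict:
        """Generate unique default credentials for one device on the assembly line."""
        return {
            "serial": serial,
            "username": f"device-{secrets.token_hex(4)}",
            "password": secrets.token_urlsafe(16),  # printed on the device label
        }

    # Every unit gets its own credentials, so one leaked default
    # no longer unlocks an entire product line.
    print(provision_device("SN-000123"))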


When You See API Rate Limiting As Security

I’m neck deep in my assessment of the world of API security this week, a process which always yields plenty of random thoughts, which end up becoming stories here on the blog. One aspect of API security I keep coming across in this research is the concept of API rate limiting as security. This is something I’ve long attributed to API management service providers making their mark on the API landscape, but as I dig deeper I think there is more to this notion of what API security is (or isn’t). I think it has more to do with API providers than with the companies selling their warez to these API providers.

The API management service providers have definitely set the tone for the API security conversation (good), by standing up a gateway, and providing tools for limiting what access is available–but I think many data, content, and algorithmic stewards are very narrowly focused on security being ONLY about limiting access to their valuable resources. Many folks I come across see their resources as valuable, and when they begin doing APIs they have a significant amount of concern around putting those resources on the Internet, so once you secure and begin rate limiting things, all security concerns appear to have been dealt with. Competitors, and others, just can’t get at your valuable resources, they have to come through the gate–API security done.

Many API providers I encounter have unrealistic views of the value of their data, content, and algorithms, and when you match this with their unrealistic views about how much others want access to these valuable resources, you end up with a vacuum which allows for some very narrow views of what API security is. To help support this type of thinking, I feel like the awareness generated from API management is often focused on generating revenue, and not always about understanding API abuse, which is also something that can create blind spots when it comes to the database, server, and DNS level logging and layers where security threats emerge. I’m assuming folks often feel comfortable that the API management layer is sufficiently securing things by rate limiting, and that we can see all traffic through the analytics dashboard. I’m feeling that this is one of the reasons folks aren’t looking up at the bigger API security picture.

From what I’m seeing, assumptions that the API management layer is securing things can leave blind spots in other areas like DNS, threat information gathering, aggregation, collaboration, and sharing. I’ve come across API providers who are focused in on API management, but don’t have visibility at the database, server, container, and web server logging levels, and are only paying attention to what their API management dashboard provides access to. I feel like API management opened up a new found awareness for API providers, something that has evolved and spread to API monitoring, API testing, and API performance. I feel like the next wave of awareness will be in the area of API security. I’m just trying to explore ways that I can help my readers and clients better understand how to expand their vision of API security beyond their current field of vision.


Craft An OpenAPI For An Existing Threat Intelligence Sharing API Specification

I wrote about the opportunity around developing an aggregate threat information API, and got some interest in both creating, as well as investing in, some of the resulting products and services that would be derived from this security API work. As part of the feedback and interest on that post, I was pointed in the direction of the Structured Threat Information Expression (STIX), a structured language for cyber threat intelligence, and the Trusted Automated Exchange of Intelligence Information (TAXII), a transport mechanism for sharing cyber threat intelligence.

This is why I write about my projects openly like this, so that my readers can help me identify existing approaches for tackling whatever I am focusing on. I prefer to never reinvent the wheel, and to build on top of any existing work that is already available. I’m thinking the next step is to craft an OpenAPI for TAXII and STIX, creating a machine readable blueprint for deploying, managing, and documenting a threat intelligence API. I couldn’t find any existing work on an OpenAPI definition, so this seems like a logical place to begin working to build on, and augment, the work of the Cyber Threat Intelligence Technical Committee. Clearly, the working group has created a robust set of specifications, but I’d like to help move it closer to implementation with an OpenAPI.

I have created a Github organization to help organize any work on this project. I have forked the projects for STIX and TAXII there, as well as started a planning repository to coordinate any work I’m contributing to the conversation. I have also created a repository for working on and publishing the OpenAPI that will define the project. Once we have this, I’d like to start thinking about the development of a handful of server side implementations in maybe Node.js, Python, PHP, or other common programming languages. Here are the next steps I’d like to see occur around this project:

  • OpenAPI - Create an OpenAPI for STIX and TAXII to provide a single representation of a threat intelligence sharing API (a minimal sketch of where this could start follows this list).
  • Threat List - Take the threat intelligence list I originally published, and identify how any of the sources would map to OpenAPI.
  • Storytelling - Tell stories throughout the process to attract the attention of other players, contributors, and investors, so that this project can live on.
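
As a starting point, here is a minimal sketch of what the beginning of such an OpenAPI could look like, covering just the TAXII 2.0 server discovery endpoint–this is my own rough draft based on a quick read of the specification, not anything the working group has sanctioned:

    openapi: 3.0.0
    info:
      title: TAXII 2.0 Threat Intelligence Sharing API (draft)
      version: 0.1.0
    paths:
      /taxii/:
        get:
          summary: Server discovery
          description: Information about this TAXII server and its API roots.
          responses:
            '200':
              description: Discovery resource
              content:
                application/vnd.oasis.taxii+json; version=2.0:
                  schema:
                    type: object
                    required: [title]
                    properties:
                      title: {type: string}
                      description: {type: string}
                      contact: {type: string}
                      default: {type: string, format: uri}
                      api_roots:
                        type: array
                        items: {type: string, format: uri}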

I’m not looking to own this project 100%. I just don’t have the time and resources. However, I do want to see an OpenAPI move forward, as well as a wealth of open source resources for deploying, integrating, and aggregating around threat intelligence sharing. This work is bigger than any single player, and is something that needs to be open, spanning thousands of providers, not controlled by a handful of gatekeepers. Players in the threat intelligence sharing game need to be able to decide who they consume and share threat intelligence with, something that will require a federated world of APIs that all speak in a common language. The Cyber Threat Intelligence Technical Committee is off to a great start. I just want to contribute some cycles to help bring their work into alignment with what is going on in the mainstream world of APIs, while also beating a drum so that I can bring more attention to the work going on in this important area. Our world is going to need significant investment in the area of threat intelligence sharing if we are going to be successful online in the coming years.


The Trusted Automated Exchange of Intelligence Information (TAXII)

I recently wrote about the opportunity around developing an aggregate threat information API, and got some interest in both creating, as well as investing in some of the resulting products and services that would be derived from this security API work. As part of the feedback and interest on that post, I was pointed in the direction of the Trusted Automated Exchange of Intelligence Information (TAXII), as one possible approach to defining a common set of API definitions and tooling for the exchange of threat intelligence.

The description of TAXII from the project website describes it well:

Trusted Automated Exchange of Intelligence Information (TAXII) is an application layer protocol for the communication of cyber threat information in a simple and scalable manner. TAXII is a protocol used to exchange cyber threat intelligence (CTI) over HTTPS. TAXII enables organizations to share CTI by defining an API that aligns with common sharing models. TAXII is specifically designed to support the exchange of CTI represented in STIX.

I breezed through the documentation for TAXII version 2.0, and it looks pretty robust, and like a project that has made some significant inroads towards accomplishing what I’d like to see out there for sharing threat intelligence. I’m still understanding the overlap of TAXII, the transport mechanism for sharing cyber threat intelligence, and STIX, the structured language for cyber threat intelligence, but it looks like a robust, existing approach to defining the schema and an API for sharing threat intelligence.

Next, I am going to gather my thoughts around both of these existing definitions, and look at establishing an OpenAPI that represents STIX and TAXII, providing a machine readable definition for sharing threat intelligence. I think having an OpenAPI will provide a blueprint that can be used to define a handful of server side implementations in a variety of programming languages. I was happy to be directed to this existing work, saving me significant time and energy when it comes to this conversation. Now I don’t have to jumpstart it, I just have to contribute to, and augment the work that is already going on.


If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.