RSS

API Design News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API design conversation and that I wanted to include in my research. I'm using all of these links to better understand how the space is designing their APIs and the underlying schema used as part of their requests and responses.

Working To Keep Programming At Edges Of The API Conversation

I’m fascinated by the dominating power of programming languages. There are many ideological forces at play in the technology sector, but the dogma that exists within each programming language community continues to amaze me. The potential absence of programming language dogma within the world of APIs is one of the reasons I feel it has been successful, but alas, other forms of dogma tend to creep in around specific API approaches and philosophies, making API evangelism and adoption always a challenge.

The absence of programming languages in the API design, management, and testing discussion is why these disciplines have been so successful. People in these disciplines have been focused on the language-agnostic aspects of just doing business with APIs. It is also one of the reasons the API deployment conversation is still so fragmented, with so many ways of getting things done. When it comes to API deployment, everyone likes to bring their programming language beliefs to the table, and let them affect how we actually deliver the API, which in my opinion is why API gateways have the potential to make a comeback, and even excel when it comes to playing the role of API intermediary, proxy, and gateway.

Programming language dogma is why many groups have so much trouble going API first. They see APIs as code, and have trouble transcending the constraints of their development environment. I’ve seen many web or HTTP APIs called Java APIs or Python APIs, or reflect a specific language style. It is hard for developers to transcend their primary programming language, and learn multiple languages, or think in a language agnostic way. It is not easy for us to think outside of our boxes, consider external views, and empathize with people who operate within other programming or platform dimensions. It is just easier to see the world through our own lenses, making the world of APIs either illogical, or something we need to bend to our way of doing things.

I’m in the process of evolving from my PHP and Node.js realm to a Go reality. I’m not abandoning the PHP world because many of my government and institutional clients still operate in this world, and I’m using Node.js for a lot of serverless API stuff I’m doing. However, I can’t ignore the Go buzz I keep stumbling upon. I also feel like it is time for a paradigm shift, forcing me out of my comfort zone and pushing me to think in a new language. This is something I like to do every five years, shifting my reality, keeping me on my toes, and forcing me to not get too comfortable. I find that this keeps me humble and thinking across programming languages, which is something that helps me realize the power of APIs, and how they transcend programming languages, and make data, content, algorithms, and other resources more accessible via the web.


When The Right API Solution Is Not Always The Sensible One

I conducted an API workshop at Genscape in Boston yesterday. I thoroughly enjoyed the conversation with the technical folks there, and found their pragmatic, yet educated views on APIs refreshing. I spent the day going through the usual stops along the API lifecycle, but found some of our breakout conversations during the API design section to be what has stuck with me on my train ride back to New York. Specifically the discussions around versioning, content negotiation and the overall design of paths, parameters, and potentially providing query language layers on top of APIs.

After going through much of my textbook API design patterns for paths and parameters, then working my way through content negotiation and headers, we kept finding ourselves talking about what their clients were capable of. Bringing the API design discussion back around to why REST is still dominant in my opinion–simplicity, and lowering the bar for consumers. Time after time we found ourselves talking about their target consumers, the spreadsheet-wielding data analyst, and how the bar needed to be kept low to make sure their needs were being met. Sure, we can quickly get academic and theoretical with the design practices, but if they fall on deaf ears, and API consumers do not adopt and use the API-driven tools, does it matter?

I know that all of us API “thought leaders” would rather folks just do things the right way, but this isn’t always the reality on the ground. Most people within real world businesses don’t have the time or luxury to learn the right way of doing things, and take the time to disrupt their flow with new ways of doing things. Most of the time we need to just get people the data and other resources they need, in the tooling they are comfortable with (spreadsheet cough cough), and get out of their way. Sure, we should still be introducing people to new concepts, and working to strike a balance when it comes to the API design patterns we adopt, but it should be about striking a balance between the reality on the ground, and the future we want to see.

During the workshop we kept coming back around to simple, plain-language URIs that provide flexibility with path variables, as well as potentially relevant query parameters, allowing people to get CSV and JSON representations of the resources they need. With Excel as the target client, once again I find myself minimizing header usage, maximizing paths, and simplifying the data models folks can retrieve via APIs. I see this in industry after industry, and across the government agencies I am working with. I know that we all want our APIs to reflect the best possible patterns, and leverage the latest technology, but many of the people who will potentially be consuming our APIs just want to get at the resources we are serving up, and do not have the time or the bandwidth to get on board with anything too far beyond their current mode of getting business done.

None of this will stop me from evangelizing the “proper way to design APIs”, but it reminds me (once again) of how immovable the business world can actually be. APIs are having a significant impact on how we develop web, mobile, desktop, and device applications, but one of the reasons web APIs have found so much success is that they are simple, scrappy, and flexible. Being able to serve up valuable data and other resources through simple URLs, allowing anyone to take them and put them to use in the application of their choice, has fed the explosion of APIs we see across the landscape today. It is why we still see just as many poorly designed APIs in 2018 as we saw in 2010. It is because the best design doesn’t always win. Sometimes you just need the right design for the job, the one that will make sense to the audience who will be consuming it. It is why hypermedia, GraphQL, and other design patterns will continue to be the choice of some practitioners, but simple, plain REST will keep dominating the landscape.


How Should We Be Organizing All Of Our Microservices?

A question I get regularly in my API workshops is, “how should we be organizing all of our microservices?” To which I always recommend they tune into what the API Academy team is up to, and then I dance around and give a long-winded answer about how hard it is for me to answer that. I think in response, I’m going to start asking for a complete org chart for their operations, a list of all their database schema, and a list of all their clients and the industries they are operating in. It will still be a journey for them, and for me, to answer that question, but maybe this response will help them understand the scope of what they are asking.

I wish I could provide simple answers for folks when it comes to how they should be naming, grouping, and organizing their microservices. I just don’t have enough knowledge about their organization, clients, and the domains in which they operate to provide a simple answer. It is another one of those API journeys an organization will have to embark on, and find their own way forward. It would take so much time for me to get to know an organization, its culture, resources, and how they are being put to use, that I hesitate to even provide any advice, short of pointing them to the books and talks the API Academy team publishes. They are the only guidance I know of that goes beyond the hyped definition of microservices, and actually gets at the root of how you do it within specific domains, and tackles the cultural side of the conversation.

A significant portion of my workshops lately have been about helping groups think about delivering services using a consistent API lifecycle, and showing them the potential for API governance if they can achieve this consistency. Clearly I need to back up a bit, and address some of the prep work involved with making sure they have an organizational chart, all of the schema they can possibly bring to the table, existing architecture and services in play, as well as much detail on the clients, industries, and domains in which they operate. Most of my workshops I’m going in blind, not knowing who will all be there, but I think I need a section dedicated to the business side of doing microservices, before I ever begin talking about the technical details of delivering microservices, otherwise I will keep getting questions like this that I can’t answer.

Another area that is picking up momentum for me in these discussions is a symptom of the lack of API discovery, and is directly related to the other areas I just mentioned. You need to be able to deliver APIs along a lifecycle, but more importantly you need to be able to find the services, schema, and people behind them, as well as coherently speak to who will be consuming them. Without comprehensive discovery, and the ability to understand all of these dependencies, organizations will never be able to find the success they desire with microservices. They won’t be any better off than with the monolithic way many organizations have been doing things to date; it will just be much more distributed complexity, achieving the same results as the monolithic systems that are in place today.


Using Jekyll To Look At Microservice API Definitions In Aggregate

I’ve been evaluating microservices at scale, using their OpenAPI definitions to provide API design guidance to each team using Github. I just finished a round of work where I took 20 microservices, evaluated each OpenAPI definition individually, and made design suggestions via an updated OpenAPI definition that I submitted as a Github issue. I was going to submit these as pull requests, but I really want the API design guidance to be suggestions, as well as a learning experience, so I didn’t want them to feel like they had to merge all my suggestions.

After going through all 20 OpenAPI definitions, I took all of them and dumped them into a single local repository where I was running Jekyll via localhost. Then using Liquid I created a handful of web pages for looking at the APIs across all microservices in a single page–publishing two separate reports:

  • API Paths - An alphabetical list of the APIs for all 20 services, listing the paths, verbs, parameters, and headers in a single bulleted list for evaluation.
  • API Schema - An alphabetical list of the APIs for all 20 services, listing the schema definitions, properties, descriptions, and types in a single bulleted list for evaluation.

While each service lives in its own Github repository, and operates independently of each other, I wanted to see them all together, to be able to evaluate their purpose and design patterns from the 100K level. I wanted to see if there was any redundancy in purpose, and if any services overlapped in functionality, as well as trying to see the potential when all API resources were laid out on a single page. It can be hard to see the forest for the trees when working with microservices, and publishing everything as a single document helps flatten things out, and is the equivalent of climbing to the top of the local mountain to get a better view of things.

Jekyll makes this process pretty easy by letting me dump all OpenAPI definitions into a single collection, which I can then navigate using Liquid on a single HTML page, providing a comprehensive report of the functionality across all microservices. I am calling this my microservices monolith report, as it kind of reassembles all the individual microservices into a single view, letting me see the collective functionality available across the project. I’m currently evaluating these services from an API governance perspective, but at a later date I will be taking a look from the API integration perspective, and trying to understand if the services will work in concert to help deliver on a specific application objective.
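For anyone who wants to do something similar without Jekyll, here is a rough Python sketch of the same monolith report idea. It assumes a hypothetical local folder of OpenAPI definitions, one YAML file per service, and simply flattens paths and schema into two alphabetical lists; it is the equivalent logic, not the Liquid templates I describe above.

    # A rough Python equivalent of the Jekyll/Liquid "monolith report" described above.
    # Assumes a hypothetical local folder named "openapi" containing one YAML file per service.
    from pathlib import Path
    import yaml  # pip install pyyaml

    def load_definitions(folder="openapi"):
        """Load every OpenAPI definition found in the folder into a list of (service, spec)."""
        definitions = []
        for path in sorted(Path(folder).glob("*.yaml")):
            with path.open() as handle:
                definitions.append((path.stem, yaml.safe_load(handle)))
        return definitions

    def paths_report(definitions):
        """Print an alphabetical list of every path and verb across all services."""
        rows = []
        for service, spec in definitions:
            for path, operations in spec.get("paths", {}).items():
                for verb in operations:
                    rows.append(f"{service} {verb.upper()} {path}")
        for row in sorted(rows):
            print(row)

    def schema_report(definitions):
        """Print every schema definition and its properties across all services."""
        for service, spec in definitions:
            # OpenAPI 2.0 keeps schema under "definitions"; 3.0 keeps it under components/schemas.
            schemas = spec.get("definitions", {}) or spec.get("components", {}).get("schemas", {})
            for name, schema in sorted(schemas.items()):
                props = ", ".join(schema.get("properties", {}).keys())
                print(f"{service} {name}: {props}")

    if __name__ == "__main__":
        specs = load_definitions()
        paths_report(specs)
        schema_report(specs)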


Looking At 20 Microservices In Concert

I checked out the Github repositories for twenty microservices of one of my clients recently, looking to understand what is being accomplished across all these services as they work independently toward a single collective objective. I’m being contracted to come in blindly and provide feedback on the design of the APIs being exposed across services, and to help provide guidance on their API lifecycle, as well as eventually API governance when things have matured to that level. Right now we are addressing pretty fundamental definition and design issues, but eventually we’ll hopefully graduate to the next level.

A complete and up to date README for each microservice is essential to understanding what is going on with a service, and a robust OpenAPI definition is critical to breaking down the details of what each API delivers. When you aren’t part of each service’s development team it can be difficult to understand what each service does, but with an up to date README and OpenAPI, you can get up to speed pretty quickly. If a service is well documented via its README, and the API is well designed, with the surface area reflected in its OpenAPI, you can go from not knowing what a service does to understanding its value within minutes, not hours.

When each service possesses an OpenAPI it becomes possible to evaluate what they deliver at scale. You can take all the APIs, their paths, headers, parameters, and schema, and lay them out in different ways so that you can begin to paint a picture of what they deliver in aggregate. Bringing all the disparate services back together to perform together in a sort of monolith concert, while still acknowledging they all do their own thing independently. Allowing us to look at how many different services can be used in concert to deliver a single application, or potentially a variety of application instances. Thinking critically about each independent service, but more importantly how they all work together.

I feel like many groups are still struggling with decomposing their monolithic systems into separate services, and while some are doing so in a domain-driven way, few are beginning to invest in understanding how they move forward with services in concert to deliver on application needs. Many of the groups I’m working with are so focused on decomposing and tearing down, they aren’t thinking too critically about how they will make all of this begin to work together again. I see monolithic systems working like a massive church organ, which takes a lot of maintenance, and requires a single (or a handful of) knowledgeable operators to play. Where microservices are much more like an orchestra, where every individual player has a role, but they play in concert, directed by a conductor. I feel like most groups I’m talking with are just beginning the process of hiring a conductor, and have a bunch of musicians roaming around–not quite ready to play any significant productions quite yet.


Constraints Introduced By Supporting Standardized API Definitions and Schema

I’ve had a few API groups contact me lately regarding the challenges they are facing when it comes to supporting organizational, or industry-wide API definitions and schema. They were eager to support common definitions and schema that have been standardized, but were getting frustrated by not being able to do everything they wanted, and having to live within the constraints introduced by the standardized definitions. Which is something that doesn’t get much discussion by those of us who are advocating for the standardization of APIs and schema.

Web APIs Come With Their Own Constraints
We all want more developers to use our APIs. However, with more usage comes more responsibility. Also, to get more usage our APIs need to speak to a wider audience, something that common API definitions and schema will help with. This is why web APIs are working, because they speak to a wider audience, however with this architectural decision we are making some tradeoffs, and accepting some constraints in how we do our APIs. This is why REST is just one tool in our toolbox, so we can use the right tool, establish the right set of constraints, to allow our APIs to be successful. The wider our API toolbox, the wider the number of options we will have available when it comes to how we design our APIs, and what schema we can employ.

Allowing For Content Negotiation By Consumers
One way I’ve encouraged folks to help alleviate some of the pain around the adoption of common API definitions and schema is to provide content negotiation to consumers, allowing them to obtain the response they are looking for. If people want the standardized approaches they can choose those, and if they want something more precise, or custom they can choose that. This also allows API providers to work around the API standards that have become bottlenecks, while still supporting them where they matter. Having the best of both worlds, where you are supporting the common approach, but still able to do what you want when you feel it is important. Allowing for experimentation as well as standardization using the same APIs.
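As a simple illustration of what this looks like from the consumer side, here is a minimal sketch using a hypothetical endpoint and media types, where one consumer asks for the standardized representation and another asks for a custom one via the Accept header:

    # A minimal sketch of consumer-driven content negotiation.
    # The endpoint URL and the custom media type are illustrative, not a real service.
    import requests

    BASE = "https://api.example.com/organizations"  # hypothetical endpoint

    # A consumer that wants the standardized schema asks for it explicitly...
    standard = requests.get(BASE, headers={"Accept": "application/json"})

    # ...while another consumer asks for a custom or extended representation.
    custom = requests.get(BASE, headers={"Accept": "application/vnd.example.custom+json"})

    print(standard.headers.get("Content-Type"))
    print(custom.headers.get("Content-Type"))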

Participate In Standards Body and Process
Another way to help move things forward is to participate in the standards body that is moving an API definition or schema forward. Make sure you have a seat at the table so that you can present your case for where the problems are, and how to improve on the design, definition, and schema being evolved. Taking a lead in creating the world you want to see when it comes to API and schema standards, and not just sitting back being frustrated because it doesn’t do what you want it to do. Having a role in the standards body, and actively participating in the process, isn’t easy, and it can be time consuming, but it can be worth it down the road, helping you better achieve your goals when it comes to your APIs operating as you expect, as well as serving the wider community and industry.

Delivering APIs at scale won’t be easy. To reach a wide audience with your API it helps to be speaking a common vocabulary. This doesn’t always allow you to move as fast as you’d like, and do everything exactly as you envision. You will have to compromise. Operate within constraints. However, it can be worth it. Not just for your organization, but for the overall health of your community, and the industry you operate in. You never know, with a little patience, collaboration, and communication, you might even learn new approaches to defining your APIs and schema that you wouldn’t have thought of in isolation. Also, experimentation with new patterns will still be important, even while working to standardize things. In the end, a balance between standardized and custom will make the most sense, and hopefully alleviate your frustrations in moving things forward.


API Design, Either the Provider Does the Work or the Consumer Will Have To

I’m always fascinated by the API design debate, and how many entrenched positions there are when it comes to the right way of doing it. Personally, I don’t see any single right way of doing it, I just see many different ways to put the responsibility on the provider’s or the consumer’s shoulders, or to share the load between them. I spend a lot of time profiling APIs, crafting OpenAPI definitions that try to capture 100% of the surface area of an API. Something that is pretty difficult to do when you aren’t the provider, and you are just working from existing static documentation. It is just hard to find every parameter and potential value, exhaustively detailing what is possible when you use an API.

When designing APIs I tend to lean towards exposing the surface area of an API in the path. This is my personal preference for when I’m using an API, but I also do this to try and make APIs more accessible to non-developers. However, I regularly get folks who freak out at how many API paths I have, preferring to have the complexity at the parameter level. This conversation continues with the GraphQL and other query language folks who prefer to craft more complex queries that can be passed in the body, to define exactly what is desired. I do not feel there is a right way of doing this, but it does reflect what I said earlier about balancing the load between provider and consumer.

The burden for defining and designing the surface area of an API resides in the provider’s court–only the provider truly knows the entire surface area. Then I’d add that when you offload the responsibility to the consumer using GraphQL, you are limiting who will be able to craft a query, putting API access beyond the reach of business users, and many developers. I feel that exposing the surface area of an API in the URL makes sense to a lot of people, and puts it all out in the open. Unless you are going to provide every possible enum value for all parameters, this is the only way to make 100% of the surface area of an API visible and known. However, depending on the complexity of an API, this is something that can get pretty unwieldy pretty quickly, making parameters the next stop for defining and designing the surface area of your API.
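To make the trade-off concrete, here is a small, purely hypothetical sketch of the same lookup exposed two ways, once with the surface area in the path, and once with it pushed into query parameters:

    # Two hypothetical ways of exposing the same lookup, illustrating the trade-off
    # discussed above: surface area in the path versus surface area in parameters.
    import requests

    # Path-centric: every option is visible, linkable, and readable in the URL itself.
    by_path = requests.get("https://api.example.com/organizations/123/locations/active")

    # Parameter-centric: one path, with the detail pushed into query parameters.
    by_params = requests.get(
        "https://api.example.com/locations",
        params={"organization_id": 123, "status": "active"},
    )

    print(by_path.status_code, by_params.status_code)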

I know my view of API design doesn’t sync with many API believers, across many different disciplines. I’m not concerned with that. I’m happy to hear all the pros and cons of any approach. My objective is to lower the bar for entry into the API game, not raise the bar, or hide the bar. I’m all for pushing API providers to be more responsible for defining the surface area of their API, and not just offloading it on consumers to do all the work, unless they implicitly ask for it. In the end, I’m just a fan of simple, elegantly designed APIs that are intuitive and well documented using OpenAPI. I want ALL APIs to be accessible to everyone, even non-developers. I want them accessible to developers, minimizing the load on them to understand what is happening, and what all the possibilities are. I just don’t want to spend too much time on-boarding with an API. I just want to go from discovery to exploration in as little time as possible.


GraphQL, Just Get Out Of My Way And Give Me What I Want

One of the arguments I hear for why API providers should be employing GraphQL is that they should just get out of developers’ way, and let them build their own queries so that they can get exactly the data that they want. As an application developer, we know what we want from your API: do not make us make many different calls to multiple endpoints, just give us one API and let us ask for exactly what we need. It is an eloquent, logical argument when you operate and live within a “known bubble”, and you know exactly what you want. Now, ask yourself, will every API consumer know what they want? Maybe in some scenarios every API consumer will know what they want, but in most situations developers will not have a clue what they want, what they need, or what an API does.

This is where the GraphQL as a replacement for REST argument begins breaking down. In a narrowly defined bubble, where every developer knows the schema, knows GraphQL, and knows what they want–GraphQL can make a lot of sense. In the autodidact alpha developer startup world this argument makes a lot of sense, and gets a lot of traction. However, not everyone lives in this world, and in the real world, API design can become very important. Helping people understand what is possible, learn the schema behind an API, and become more familiar with an API, until they have a better understanding of what they might want. I’m not saying GraphQL doesn’t have a place when a significant portion of your audience knows what they want, I’m just saying that you shouldn’t leave everyone else behind.

To further turn this argument on its head, as a developer, if I know what I want, why make me build a query at all? Just give me a single URL with what I want! Don’t make me do the heavy lifting, and work to craft a query, just give me a single URL with minimal authentication, and let me get what I need in one click. Sure, it will take some time before you can craft enough URLs to meet everyone’s needs, but it will be worth it for those you can. You can always craft new paths based upon requests, and yes, you can also augment your web APIs with a GraphQL endpoint for those neediest, most demanding developers who love building queries, and know exactly what they want. I guess my point is that there are a lot of definitions of knowing what you want, and how APIs can satisfy that, and not everyone will be in the same mindset as you are.
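Here is a rough sketch of what that difference looks like in practice, using hypothetical endpoints and a made-up schema, with the consumer crafting a GraphQL query in one case, and the provider handing them a single URL in the other:

    # The same data fetched two ways -- a GraphQL query the consumer has to craft,
    # versus a single URL the provider has already designed. Both endpoints and the
    # schema are hypothetical, purely to illustrate where the work lands.
    import requests

    # Consumer does the work: knows the schema, knows GraphQL, builds the query.
    graphql_query = """
    {
      organization(id: 123) {
        name
        locations { city }
      }
    }
    """
    gql = requests.post("https://api.example.com/graphql", json={"query": graphql_query})

    # Provider does the work: one URL, one click, same answer.
    rest = requests.get("https://api.example.com/organizations/123?include=locations")

    print(gql.status_code, rest.status_code)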

Now I have to bury in this last paragraph the fact that I’m not anti-GraphQL. I am anti-GraphQL being sold as a replacement for simpler, resource-centered web APIs (aka REST). So if you are going to come at me with the Y U HATE GraphQL response–don’t. I’m just trying to show GraphQL believers that they are leaving some people behind with a GraphQL-only approach, unless they are 100% sure that ALL their API consumers know GraphQL, know their schema, and know what they want. GraphQL is an important tool in the API toolbox, but it isn’t the one-size-fits-all tool it is often sold as. After much contemplation, and working on the ground within enterprise groups, I am trying to put GraphQL to work in more use cases where it makes sense, and before I can do this I have to push back on much of the misinformation that has been peddled about it, and undo the damage I’m seeing on the ground.


Helping Stoplight.io Get The Word Out About Version 3.0

I’ve been telling stories about what the Stoplight.io team has been building for a couple of years now. They are one of the few API service provider startups left that are doing things that interest me, and really delivering value to their API consumers. In the last couple of years, as things have consolidated, and funding cycles have shifted, there just hasn’t been the same amount of investment in interesting API solutions. So when Stoplight.io approached me to do some storytelling around their version 3.0 release, I was all in. Not just because I’m getting paid, but because they are doing interesting things, that I feel are worth talking about.

I’ve always categorized Stoplight.io as an API design solution, but as they’ve iterated upon the last couple of versions, I feel they’ve managed to find their footing, and are maturing to become one of the few truly API lifecycle solutions available out there. They don’t serve every stop along the API lifecycle, but they do focus on a handful of the most valuable stops, and most importantly, they have adopted OpenAPI as the core of what they do, allowing API providers to put Stoplight.io to work for them, as well as any other solutions that support OpenAPI at the core.

As far as the stops along the API lifecycle that they service, here is how I break them down:

  • Definitions - An OpenAPI driven way of delivering APIs, that goes beyond just a single definition, and allows you to manage your API definitions at scale, across many teams, and services.
  • Design - One of the most advanced API design GUI solutions out there, helping you craft and evolve your APIs using the GUI, or working directly with the raw JSON or YAML.
  • Virtualization - Enabling the mocking and virtualization of your APIs, allowing you to share, consume, and iterate on your interfaces long before you have to deliver more costly code.
  • Testing - Provides the ability to not just test your individual APIs, but to define and automate detailed tests, assertions, and a variety of scenarios to ensure APIs are doing what they should be doing.
  • Documentation - Allows for the publishing of simple, clean, but interactive documentation that is OpenAPI driven, and can be shared with your team, and your API community through a central portal.
  • Discovery - Tightly integrated with Github, and maximizing an OpenAPI definition in a way that makes the entire API lifecycle discoverable by default.
  • Governance - Allows for teams to get a handle on the API design and delivery lifecycle, while working to define and enforce API design standards, and enforce a certain quality of service across the lifecycle.

They may describe themselves a little differently, but in terms of the way that I tag API service providers, these are the stops they service along the API lifecycle. They have a robust API definition and design core, with an attractive, easy to use interface, which allows you to define, design, virtualize, document, test, and collaborate with your team, community, and other stakeholders. Which makes them a full API lifecycle service provider in my book, because they focus on serving multiple stops, and they are OpenAPI driven, which allows every other stop to also be addressed using any other tools and services that support OpenAPI–which is how you do business with APIs in 2018.

I’ve added API governance to what they do, because they are beginning to build in much of what API teams are going to need to begin delivering APIs at scale across large organizations. Not just design governance, but the model and schema management you’ll need, combined with mocking, testing, documentation, and the discovery that comes along with delivering APIs like Stoplight.io does. They reflect not just where the API space is headed with delivering APIs at scale, but what organizations need when it comes to bringing order to their API-driven, software development lifecycle in a microservices reality.

I have five separate posts that I will be publishing over the next couple weeks as Stoplight.io releases version 3.0 of their API development suite. Per my style I won’t always write directly about their product, but I’ll be talking about the solutions it delivers, and occasionally you’ll hear me mention them directly, because I can’t help it. Thanks to Stoplight.io for supporting what I do, and thanks to you my readers for checking out what Stoplight.io brings to the table. I think you are going to dig what they are up to.


API Industry Standards Negotiation By Media Type

I am trying to help push forward the conversation around the API definition for the Human Services Data Specification (HSDS) in a constructive way amidst a number of competing interests. I was handed a schema for sharing data about organizations, locations, and services in a CSV format. I took this schema and exposed it with a set of API paths, keeping the flat file structure intact, making no assumptions around how someone would need to access the data. I simply added the ability to get HSDS over the web as JSON–I would like to extend this to HTML, CSV, JSON, and XML, reaching as wide an audience as possible with the basic implementation.

As we move forward discussions around HSDS and HSDA, I’m looking to use media types to help separate the different types of access people are looking for. I don’t want to leave folks who only have basic CSV export or import capabilities behind, but still want to provide guidance for exchanging HSDA over the web. To help organize higher levels of demand on the HSDS schema, I’m going to break things out into some specialized media types as well as the default set:

  • Human Services Data Specification (HSDS) - text/csv - Keeping data package basic, spreadsheet friendly, yet portable and exchangeable.
  • Human Services Data API (HSDA) - (application/json, text/xml, text/csv, and text/html) - Governing access at the most basic level, keeping true to the schema, but allowing for content negotiation over the web.
  • Human Services Data API (HSDA) Hypermedia - (application/hal+json and application/hal+xml) - Allowing for more comprehensive responses to HSDA requests, addressing searching, filtering, pagination, and relationship linking between any HSDS returned.
  • Human Services Data API (HSDA) Bulk - (application/vnd.hsda.bulk) - Focusing on heavy system to system bulk transfers, and eventually syncing, backups, archives, and migrations. Dealing with the industrial levels of HSDA operations.
  • Human Services Data API (HSDA) Federated - (application/vnd.hsda.federated) - Allowing for a federated HSDA implementation that allows for the moderation of all POST, PUT, and DELETE by a distributed set of partners. Might also accompany the bulk system where partners can enable sync or bulk extraction for use in their own implementations.

I am working to define an industry-level API standard. I am not operating an individual API implementation (well, I do have several demos), so media types allow me to let each vendor or implementation negotiate the type of content they desire. If they are interested in developing single page applications or conversational interfaces they can offer up the hypermedia implementation. If they are system administrators looking to load up large datasets, or extract large datasets, they can work within the HSDA Bulk realm. In the end I can see any one of these APIs being deployed in isolation, as well as all four of them living side by side, driving a single HSDS/A compliant platform.
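As a rough sketch of what this negotiation could look like from the consumer side, here is a hypothetical HSDA implementation being asked for the different representations listed above (the host is made up; the media types are the ones proposed in this post):

    # A sketch of negotiating the proposed HSDA media types against a hypothetical implementation.
    import requests

    BASE = "https://hsda.example.org/services"  # hypothetical implementation

    basic = requests.get(BASE, headers={"Accept": "application/json"})
    spreadsheet = requests.get(BASE, headers={"Accept": "text/csv"})
    hypermedia = requests.get(BASE, headers={"Accept": "application/hal+json"})
    bulk = requests.get(BASE, headers={"Accept": "application/vnd.hsda.bulk"})

    for label, response in [("basic", basic), ("csv", spreadsheet), ("hal", hypermedia), ("bulk", bulk)]:
        print(label, response.status_code, response.headers.get("Content-Type"))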

This is all preliminary thought. All I have currently is HSDS, and HSDA returning JSON. I’m just brainstorming about what possible paths forward there are, and I think the solution involves content negotiation at the vendor, implementation, and consumption levels. Content type negotiation seems to provide a great way to break up and separate concerns, keeping simple things simple, and keeping some of the more robust integrations segregated, while still working in concert. I’m always impressed by the meaningful impact something like content type has had on the web API design conversation, and always surprised when I learn new approaches to leveraging the content type header as part of API operations.


Misconceptions About What OpenAPI Is(n't) Still Slowing Conversations

I’ve been pushing forward conversations around my Human Services Data API (HSDA) work lately, and hitting some friction with folks around the finer technical details of the API. I feel the friction around these API conversations could be streamlined with OpenAPI, but with most folks completely unaware of what OpenAPI is and does, there is friction. Then for the handful of folks who do know what OpenAPI is and does, the common misconceptions about what they think it is are slowing the conversation.

Let’s start with the folks who are unaware of what OpenAPI is. I am seeing two main ways that human services data vendors and implementations have conversations about what they need: 1) documentation, and 2) code. The last wave of HSDA feedback was very much about receiving PDF or Word documentation about what is expected of an application and the API behind it. The next wave of conversations I’m having is very much here are some code implementations to demonstrate what someone is looking to articulate. Both are very expensive ways of articulating and sharing what is needed for the future, or developing a shared understanding. My goal throughout these conversations is to help folks understand that there is another, more efficient, lower-cost way to articulate and share what is needed–OpenAPI.

Beyond the folks who are not OpenAPI aware, the second group of folks sees OpenAPI as a documentation tool, or a code generation tool. Interestingly enough, a vantage point that is not very far evolved beyond the first group. Once you know what you have, you document it using OpenAPI, or you generate some code samples from it. This relegates OpenAPI to a very downstream tool, something you bring in after all the decisions are made. I had someone say to me that OpenAPI is great, but we need a way to be having a conversation about each specific API request, the details of that request, with a tangible response to that request–to which I responded, “that is OpenAPI”. Further showing that I have a significant amount of OpenAPI education ahead of me, before we can efficiently use it within these conversations about moving the industry specification forward. ;-(
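To illustrate the point, here is a minimal, made-up OpenAPI 2.0 fragment (not the actual HSDA definition) showing how a single definition already captures a specific request, its details, and a tangible response, loaded with a few lines of Python just to show it is machine readable rather than documentation after the fact:

    # A minimal, hypothetical OpenAPI 2.0 fragment describing one request and its response.
    import yaml  # pip install pyyaml

    fragment = yaml.safe_load("""
    paths:
      /services:
        get:
          summary: List human services
          parameters:
            - name: query
              in: query
              type: string
          responses:
            '200':
              description: A list of services matching the query
              schema:
                type: array
                items:
                  $ref: '#/definitions/Service'
    definitions:
      Service:
        type: object
        properties:
          id: {type: string}
          name: {type: string}
    """)

    # The request, its parameters, and its response codes are all in one conversation-sized artifact.
    for path, operations in fragment["paths"].items():
        for verb, details in operations.items():
            print(verb.upper(), path, "->", list(details["responses"].keys()))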

The reason OpenAPI (fka Swagger) began as documentation, then code generation, then exploded as a mocking and API design solution was the natural progression of things. I feel like this progression reflects how people are also learning about the API design lifecycle, and in turn the OpenAPI specification itself. This is why the name change from Swagger to OpenAPI was so damaging in my opinion, as it is further confusing, and setting back these conversations for many folks. No use living in the past though! I am just going to continue doing the hard work of helping folks understand what OpenAPI is, and how it can help facilitate conversations about what an API is, what an API should do, and how it can be delivering value for humans–before any code is actually written, helping make sure everyone is on the same page before moving too far down the road.


HTTP Status Codes Are An Essential Part Of API Design And Deployment

It takes a lot of work to provide a reliable API that people can depend on. Something your consumers can trust, and that will provide them with consistent, stable, meaningful, and expected behavior. There are a lot of affordances built into the web, allowing us humans to get around, and make sense of the ocean of information on the web today. These affordances aren’t always present with APIs, and we need to communicate with our consumers through the design of our API at every turn.

One area I see IT and developer groups often overlook when it comes to API design and deployment is HTTP Status Codes. That standardized list of meaningful responses that comes back with every web and API request:

  • 1xx Informational - An informational response indicates that the request was received and understood. It is issued on a provisional basis while request processing continues.
  • 2xx Success - This class of status codes indicates the action requested by the client was received, understood, accepted, and processed successfully.
  • 3xx Redirection - This class of status code indicates the client must take additional action to complete the request. Many of these status codes are used in URL redirection.
  • 4xx Client errors - This class of status code is intended for situations in which the client seems to have errored.
  • 5xx Server error - The server failed to fulfill an apparently valid request.

Without HTTP Status Codes, applications won’t ever really know if their API request was successful or not, and even if an application can tell there was a failure, it will never understand why. HTTP Status Codes are fundamental to the web working with browsers, and to APIs working with applications. HTTP Status Codes should never be left on the API development workbench, and API providers should always go beyond just 200 and 500 for every API implementation. Without them, NO API platform will ever scale and support any number of external integrations and applications.
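As a minimal sketch of what this means for the application on the other end, here is a hypothetical client interpreting the five classes of status codes, something it simply cannot do if the API always answers with 200:

    # A minimal sketch of a client acting on status code classes. The URL is hypothetical,
    # and redirects are left unfollowed so the 3xx class stays visible to the client.
    import requests

    response = requests.get("https://api.example.com/data.json", allow_redirects=False)

    if 200 <= response.status_code < 300:
        inventory = response.json()  # success -- safe to process the payload
    elif 300 <= response.status_code < 400:
        print("Redirected:", response.headers.get("Location"))
    elif 400 <= response.status_code < 500:
        print("Client problem -- check the request:", response.status_code)
    else:
        print("Server problem -- retry later:", response.status_code)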

The most important example of the importance of HTTP Status Codes I have in my API developer toolbox is from when I was working to assist federal government agencies in becoming compliant with the White House’s order for all federal agencies to publish a machine-readable index of the public data inventory on their agency website. As agencies got to work publishing JSON and XML (an API) of their data inventory, I got to work building an application that would monitor their progress, indexing the available inventory, and providing a dashboard that the GSA and OMB could use to follow their progress (or lack thereof).

I would monitor the dashboard in real time, but weekly I would also go through many of the top level cabinet agencies, and some of the more prominent sub-agencies, and see if there was a page available in my browser. There were numerous agencies who I found had published their machine-readable public data inventory, but had returned a variety of HTTP status codes other than 200–resulting in my monitoring application considering the agency not compliant. I wrote several stories about HTTP Status Codes, which the GSA and White House groups circulated with agencies, but ultimately I’d say this stumbling block was one of the main reasons that caused this federated public data API project to stumble early on, and never gain proper momentum–a HUGE loss to an open and more observable federal government. ;-(

HTTP Status Codes aren’t just a nice-to-have thing when it comes to APIs, they are essential. Without HTTP Status Codes each application will deliver unreliable results, and aggregate or federated solutions that are looking to consume many APIs will become much more difficult and costly to develop. Make sure you prioritize HTTP Status Codes as part of your API design and deployment process. At the very least make sure all five classes of HTTP Status Codes are present in your release. You can always get more precise and meaningful with specific status codes later on, but ALL APIs should be employing all five classes of HTTP Status Codes by default, to prevent friction and instability in every application that builds on top of your APIs.


Moving The Human Services API Specification From Version 1.1 to 1.2

I am preparing for the recurring governance meeting for the Open Referral Human Services Data API standard–for which I’m the technical lead. I need to load every detail of my Human Services Data API work into my brain, and writing stories is how I do this. I need to understand where the definition is with v1.1, and encourage discussion around a variety of topics when it comes to version 1.2.

Constraints From Version 1.0 To v1.1

I wasn’t able to move as fast as I’d like from 1.0 to 1.1, resulting in me leaving out a number of features. The primary motivation was to make sure as much of version 1.1 of the Human Services Data Specification (HSDS) was covered as possible–something I ended up doing horizontally with new API paths, over loading up the core paths of /organizations, /locations, and /services. There were too many discussions on the table regarding the scope and filtering of data and schema for these core paths. Something which led to a discussion about /search–resulting in me pushing off API design discussions on how to expand vertically at the core API path level to future versions.

There were just too many decisions to make at the API request and response level for me to decide on all of the areas–warranting more discussion. Additionally, there were other API design discussions regarding operational, validation, and more utility APIs to consider for inclusion in future versions, expanding the scope and filtering discussions to the API path, and now API project, level. In preparation for our regular governance meeting I wanted to run through all of the open API design issues, as well as the additional projects the community needs to be thinking about.

API Design

As part of my Human Services Data API (HSDA) work we have opened up a pretty wide API design conversation regarding where the API definition could (should) be going. I’ve tried to capture the conversations going on across the Slack, and Google Group using GitHub issues for the HSDA GitHub repository. I will be focusing in on 16 of these issues for the current community discussions.

Versioning

We are moving the version of the API specification forward from 1.0 to 1.1. This version describes the API definition, to help quantify the compliance of any single API implementation. This is not guidance regarding how API providers should version their APIs–each implementation can articulate their compliance using an OpenAPI definition, or just in operation by being compliant. I purposely dodged providing versioning guidance for specific API implementations–until I could open up discussion around this subject.

If you need a primer on API versioning I recommend Troy Hunt’s piece which helps highlight:

  • URL: We put the API version into the URL: https://example.com/api/v1.1/organizations/
  • Custom request header: Using a header such as “api-version: 1.1”
  • Accept header: Using the accept header to specify the version “Accept: application/vnd.hsda.v1.1+json” - which relates to content negotiation discussions.
  • No Versioning - We do not offer any versioning guidance and let each API implementation decide for itself, with no version being a perfectly acceptable answer.

API versioning discussions are always hot topics, and there is no perfect answer. If we are to offer API versioning guidance for HSDA-compliant API providers, I recommend putting it in the URL, not because it is the right answer, but because it is the right answer for this community. It is easy to implement, and easy to understand. Although I’m not 100% convinced we should be offering guidance at all.
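For reference, here is a quick sketch of what the three versioning options above look like from the consumer side, using the example URL from the list; the unversioned base path used for the header approaches is hypothetical:

    # A sketch of the versioning options listed above, from the consumer's point of view.
    import requests

    # 1) Version in the URL -- easy to see, easy to implement (the recommended option here).
    requests.get("https://example.com/api/v1.1/organizations/")

    # 2) Custom request header carrying the version.
    requests.get("https://example.com/api/organizations/", headers={"api-version": "1.1"})

    # 3) Accept header, tying versioning to content negotiation.
    requests.get(
        "https://example.com/api/organizations/",
        headers={"Accept": "application/vnd.hsda.v1.1+json"},
    )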

I would like to open it up to the community, and get more feedback from vendors and implementors. I’m curious what folks prefer when they are building applications. This decision was one that was too wrapped up with potential content negotiation, hypermedia, and schema scope discussions to make without more discussion.

Paths

The API definition provides some basic guidance for HSDA implementations when it comes to naming API paths, providing a core set of resources, as well as sub-resources. There are a number of other API designs waiting in the wings to be hammered out, making more discussion around this relevant. How do we name additional API paths? Do we keep evolving a single stack of resources (expanding horizontally), or do we start grouping them and evolve using more sub-resources (expanding vertically)?

Right now, we are just sticking with a core set of paths for /contacts, /locations, /organizations, and /services, with /search somewhat of an outlier, or I guess a wrapper. We have moved forward with sub-resource guidance, but should provide standard API design guidance when it comes to crafting new paths, as well as sub-paths, including the actions discussion below. This will be an ongoing discussion when it comes to API design across future versions, making it an evergreen thread that will just keep growing as the HSDA definition matures.

Verbs

HTTP verb usage was another aspect of the evolution of the HSDA specification from v1.0 to v1.1–the new specification makes full use of its verbs, making sure POST, PUT, and DELETE are used across all core resources, as well as sub-resources, making the entire schema open for reading and writing at all levels. This further expanded the surface of the API definition, making it manageable at all levels.

Beyond this expansion we need to open up the discussion regarding OPTIONS and PATCH. Is there a need to provide partial updates using PATCH, and to provide guidance on using OPTIONS to communicate the requirements associated with a resource, and the capabilities of the server behind the API? Also, we should be having honest conversations about which verbs are available for sub-resources, especially when it comes to taking specific actions using HSDA paths. There is a lot more to discuss when it comes to HTTP verb usage across the HSDA specification.

Actions

I want to prepare for the future when we have more actions to be taken, and talk about how we approach API design in the service of taking action against resources. Right now HTTP verbs are taking care of the CRUD features for all resources and sub-resources. While I don’t have any current actions in the queue to discuss, we may want to consider this as part of the schema scope and filtering discussion–allowing API consumers to request partial and complete representations of API resources using action paths. For example: /organizations/simple, or /organizations/complete.

As the HSDA specification matures this question will come up more and more, as vendors and implementations require more specialized actions to be taken against resources. Ideally we are keeping resources very resource oriented, but from experience I know this isn’t always the case. Sometimes it becomes more intuitive for API developers to take action with simple, descriptive API paths, than to add more complexity with parameters, headers, and other aspects of the API’s design. I will leave this conversation open to help guide future versions, as well as the schema scope and filtering discussions.

Parameters

Currently the number of parameters in use for any single endpoint is pretty minimal. The core resources allow for querying and sorting, but as of version 1.1, parameters are still pretty well-defined and minimal. The only path that has an extensive set of parameters is /search, which possesses category, email, keyword, language, lat_lng, location, org_name, page, per_page, radius, service_area, and status. I’d like to continue the discussion about which parameters should be added to other paths, as well as used to help filter the schema, and other aspects of the API design conversation.
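As an illustration, here is what a /search request using a few of those parameters might look like against a hypothetical HSDA implementation (the shape of the response will depend on the implementation, so only the status is printed here):

    # An illustrative /search request using a handful of the parameters listed above.
    import requests

    results = requests.get(
        "https://hsda.example.org/search",  # hypothetical implementation
        params={
            "keyword": "food pantry",
            "location": "Boston, MA",
            "radius": 10,
            "per_page": 25,
            "page": 1,
        },
    )
    print(results.status_code)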

I’d like to open up the parameter discussion across all HSDA paths, but I’d also like to establish a way to regularly quantify how many paths are available, as well as how loaded they are with default values, and enumerators. I’d like to feed this into overall API design guidance, helping keep API paths reflecting a microservices approach to delivering APIs. Helping ensure HSDA services do one thing, and do it well, with the right amount of control over the surface area of the request and response of each API path.

Headers

Augmenting the parameter discussion, I want to make sure headers are an equal part of the discussion. They have the potential to play a role across several of these API design questions, from versioning to schema filtering. They will also continue to emerge in authentication, management, security, and even sorting and content negotiation discussions.

It is common for there to be a lack of literacy in developer circles when it comes to HTTP headers. A significant portion of the discussion around header usage should always be whether or not we want to invest in HTTP literacy amongst implementors, and their developer communities, over leveraging other non-header approaches to API design. HTTP headers are an important building block of the web that developers should understand, but educating developers around their use can be time intensive and costly when it comes to guidance.

Body

There is an open discussion around how the body will be used across HSDA compliant implementations. Currently the body is the default for POST and PUT, aka add and update. This body usage has been extended across all core resources, as well as sub-resources, requiring the complete resource or sub-resource representation to be part of each POST or PUT request.

There is no plan for any other APIs that will deviate from this approach, but we should keep this thread open to make sure we think about when the usage of the body is appropriate and when it might not be. We need to make sure that developers are able to effectively use the body, alongside headers, as well as parameters to get the desired results they are looking for.

Data Scope / Filtering

Currently the only filtering beyond pagination that is available is the query parameter available on the /contacts, /organizations, /locations, and /services resources. After that, /search is where the heaviest data scoping and filtering can be defined. We need to discuss the future of this. Should the core resources have similar capabilities to /search, or should /search be a first class citizen with the majority of the filtering capabilities?

There needs to be more discussion around how data will be available by default, and how it will be filtered as part of each API request. Will search be carrying most of the load, or will each core resource be given some control when it comes to filtering data? Whatever the approach, it needs to be standardized across all existing paths, as well as applied to new API designs, keeping data filtering consistent across all HSDA designs. As this comes into focus I will be making sure there is a guide that provides guidance when it comes to the data filtering practices in play.

Schema Scope / Filtering

This is one of the top issues being discussed as part of the migration from v1.1 to v1.2: how to not just filter the data that is returned as part of API responses, but how to filter what schema gets returned as part of the response. When it came to v1.0 to v1.1, I didn’t want to shift the response structure, so that I could reduce any breaking changes for existing Ohana implementations, and I wanted to open up discussion with the community regarding the best approach for allowing schema filtering.

My current recommendation when it comes to filtering how much or how little of the schema to return with each request is to allow for schema templates to be defined and named, then enable API consumers to specify which template they’d like returned. This could be specified either through a prefer header, as part of the path structure as an action, or possibly through a parameter–all would accept the name of the schema template desired (i.e. simple, complete, etc.).

This approach to enabling schema templating could be applied to GET requests, and could also be applied to POST or PUT requests. I personally recommend using a prefer header, but I also emphasize ease of use, and ease of defining the usage as part of documentation and the OpenAPI definition–which means it might make sense to allow for schema selection as part of the path name as an action. I’ll leave it to the community to ultimately decide; as with the rest of this API design and project list, I’m just looking to provide guidance and direction, built on the feedback of the community.
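As a sketch of the options on the table, here is what schema template selection could look like from the consumer side, against a hypothetical implementation; the header value, path action, and parameter name are all placeholders pending community discussion:

    # A sketch of the proposed schema template selection, with hypothetical header,
    # path action, and parameter conventions -- none of these are decided yet.
    import requests

    BASE = "https://hsda.example.org"  # hypothetical implementation

    # Option 1: a prefer header carrying the template name.
    simple = requests.get(f"{BASE}/organizations", headers={"Prefer": "schema=simple"})

    # Option 2: the template expressed as an action on the path.
    complete = requests.get(f"{BASE}/organizations/complete")

    # Option 3: a query parameter naming the template.
    via_param = requests.get(f"{BASE}/organizations", params={"schema": "complete"})

    print(simple.status_code, complete.status_code, via_param.status_code)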

Path Scope / Filtering

Next up in the scope and filtering discussion is how we define, group, and present all available API paths included in the HSDA specification. With the current specification I see three distinct groups of API paths emerging: 1) core resources (/contacts, /organizations, /locations, /services), 2) sub-resources (/physical-address, /postal-address, /phones, and more), and 3) the more utility aspects of metadata, taxonomy, and eventually webhooks.

When a new user lands on the API documentation, they should see the core resources, and not be burdened with the cognitive load associated with sub-resources or the more utility aspects of HSDA consumption. However, once ready, more advanced API paths are available. The grouping and filtering of the API paths can be defined as part of the OpenAPI definitions for the API(s), as well as the APIs.json index for the site. This path grouping will allow API consumers to limit scope and filter which API paths are available in the documentation, and possibly with SDKs, testing, and other aspects of integration.

There are additional API projects on the table that might warrant the addition of new API groups, beyond core resources, sub resources, and utility paths. The approval, feedback, and messaging discussions might require their own group, allowing them to be separated in documentation, code, testing, and other areas–reducing the load for new users, while expanding the opportunities for more advanced consumers. Eventually there might be a one to one connection between API path groups, and the API projects in the queue, allowing for different groups of APIs to be moved forward at different rates, and involve different groups of API consumers and vendors in the process.

Project Scope / Filtering

Adding the fourth dimension to this scope / filtering discussion, I’m proposing we discuss how projects are defined and isolated, which can allow them to move forward at different rates, and be reflected in documentation, code, and other resources–allowing for filtering by consumers. This will drive the path filtering described above, but apply beyond just the API, influencing documentation, SDKs, testing, monitoring, validation, and other aspects of API operations.

With this tier I am looking to decouple API projects from one another, and from the core specification. I want the core HSDS/A specification to stay focused on doing one thing well, but I’d like to establish a clear way to move forward complementary groups of API definitions, and supporting tooling, independently of the core specification. As we prepare to begin the journey from version 1.1 to 1.2, there are a number of significant projects on the table, and we need a way to isolate and decouple each additional API project in the same way we do with individual API resources–keeping them clearly defined, focused on a specific problem set, and providing a buffet of resources from which the community can choose where they’d like to participate.

Pagination This is the discussion around how results will be paginated, allowing for efficient or complete requests, and navigation through large volumes of human services data. We need to be discussing how we will evolve the current approach of using page= and per_page= to articulate pagination. This approach is a common, well understood way to allow developers to paginate, but we need to keep the discussion open as we answer some of the other API design questions on the table.
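
As a reference point, a minimal OpenAPI style sketch of the current page= and per_page= approach might look like the following. The defaults and maximum shown are assumptions for illustration, not values from the specification.

    parameters:
      - name: page
        in: query
        required: false
        description: Which page of results to return (assumed to default to the first page).
        schema:
          type: integer
          minimum: 1
          default: 1
      - name: per_page
        in: query
        required: false
        description: How many results to return per page (assumed default and maximum shown).
        schema:
          type: integer
          minimum: 1
          maximum: 100
          default: 25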

The pagination topic overlaps with the hypermedia and response structure discussions. Eventually we may offer pagination as part of a response envelope, or as relational links provided as part of the response when using JSON API, HAL, or another media type. Right now we will leave pagination as it is, but we should be thinking about how it will evolve alongside all the other API design conversations in this list.

Sorting According to the current Ohana API implementation, which is the HSDA v1.0 definition, the guidance for sorting availability is as follows:

Except for location-based and keyword-based searches, results are sorted by location id in ascending order. Location-based searches (those that use the lat_lng or location parameter) are sorted by distance, with the ones closest to the search query appearing first. keyword searches are sorted by relevance since they perform a full-text search in various fields across various tables.

This guidance follows the API definition from version 1.0 through 1.2, but for future versions we should consider providing further guidance regarding the sorting of results. I’d like to get more feedback from the community on how they are providing data sorting capabilities for API consumers, or even as part of web and mobile applications.

Response Structure Right now the API responses for HSDA are pretty flat, like the schema. As part of the move from version 1.1 to 1.2 we need to expand on them, allowing for sub-resources to be included. This conversation will be heavily influenced by the schema filtering conversation above, as well as potentially the hypermedia and content negotiation discussions below. If we are going to expand on the schema being returned with API responses, we should be discussing all possible changes to the schema at once.

This conversation is meant to bring together the API schema filtering, hypermedia, and content negotiation conversations into a single discussion regarding the overall structure of a response, by default, as well as through filtering at the path, parameter, or header levels. I’d like to see HSDA responses expand to accommodate sub resources, but also the relationships between resources, as well as assisting with pagination, sorting, and other aspects of data, schema, and path filtering. I am looking to make sure the expansion of the response structure is more inclusive than just talk of sub resource access.

Hypermedia I really want to see a hypermedia fork in the HSDA definition, allowing more advanced users to negotiate a hypermedia version of the specification, instead of the simpler, or even advanced, default versions of the API. I recommend the adoption of HAL, Siren, or JSON API as an alternate edition of an HSDA implementation. This expansion of the design of the HSDA specification would not impact the current version, but would allow for another dimension of API consumption and integration.

The relationships between human services data, and the semantic nature of the data, really beg for a hypermedia solution. It would allow for more meaningful API responses, the defining of relationships between resources, and an emphasis on the taxonomy. I will be encouraging a separate, but complementary, version of HSDA that uses one of the leading hypermedia media types. I’d like to ensure there is community awareness of the potential of this approach, and support for investing in this as part of the HSDA design strategy.
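
As a rough illustration of why hypermedia fits this data, here is what a HAL style item from a /services response might look like, shown as YAML for readability. The link relations and fields are hypothetical, not a proposed HSDA schema.

    id: "123"
    name: Emergency Food Pantry
    status: active
    _links:
      self:
        href: /services/123
      organization:
        href: /organizations/456
      location:
        href: /locations/789
      taxonomy:
        href: /taxonomies/food-assistance
      next:
        href: /services?page=2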

Status Codes One of the areas of design around version 1.1 of the HSDA specification that was put off until future versions is guidance when it comes to API response status and error codes. Right now the OpenAPI definition for version 1.1 of the HSDA specification only suggests a 200 successful response, returning a reference to the appropriate HSDS schema. A project needs to be started that would provide further guidance for 300, 400, and 500 series status codes, as well as error responses.

Each HSDA path should provide guidance on all relevant HTTP status codes, but should also provide guidance regarding the error object schema returned as part of every possible API response–helping standardize how errors are communicated, and how API consumers can navigate to a solution. Currently there is no guidance when it comes to HTTP responses and errors, something that should be addressed in version 1.2 or 1.3, depending on available resources.
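
A starting point for that guidance, sketched in OpenAPI terms, might pair the existing 200 response with a shared error schema along these lines. The field names are placeholders for the community to debate.

    components:
      schemas:
        Error:
          type: object
          properties:
            code:
              type: integer
              description: HTTP status code associated with the error.
            message:
              type: string
              description: Human readable explanation of what went wrong.
            details:
              type: string
              description: Optional pointer to the parameter or field at fault.
    paths:
      /services/{service_id}:
        get:
          parameters:
            - name: service_id
              in: path
              required: true
              schema:
                type: string
          responses:
            '200':
              description: Successful response, returning the appropriate HSDS schema.
            '404':
              description: Record not found.
              content:
                application/json:
                  schema:
                    $ref: '#/components/schemas/Error'
            '500':
              description: Server error.
              content:
                application/json:
                  schema:
                    $ref: '#/components/schemas/Error'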

Content Negotiation Augmenting other conversations around schema filtering, API response structure, and hypermedia, I want to make sure content negotiation stays part of the conversation. This aspect of API design will significantly impact API integration, and the evolution of the API specification. I want to make sure vendors, and other key actors are aware of it as an option, and can participate in the conversation regarding the different content types.

This conversation should begin with making CSV and HTML representations of the data available as part of the API response structure, alongside the current JSON representations. API consumers should have the option to get raw HTML, CSV, or JSON through content negotiation–with JSON remaining the default. Then the conversation should evolve to consider an HSDA-specific content type designation, as well as implementation of a leading hypermedia media type like JSON API, HAL, or Siren.
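
In practice that could be as simple as honoring the Accept header on existing paths, along the lines of the sketch below, with JSON staying the default when no header is sent. Any HSDA specific media type would be layered in later.

    /organizations:
      get:
        summary: List organizations
        responses:
          '200':
            description: Organizations in the format requested via the Accept header (JSON remains the default).
            content:
              application/json: {}
              text/csv: {}
              text/html: {}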

Content negotiation plays an important role in versioning the HSDA specification, as well as providing different dimensions for dealing with more complex integrations, and other aspects of operations like pagination, sorting, access to sub resources, other actions, and even data, schema, and path filtering. Like headers, content negotiation is something the mainstream developer community tends not to be universally aware of, but the benefits of adopting it far outweigh the overhead involved with bringing developers up to speed.

That concludes the list of API design conversations that are occurring as part of the move from version 1.0 to 1.1, and that will set the stage for the move towards 1.2, and beyond. It is a lot to consider, but it is a manageable amount for the community to think about as part of the version 1.1 feedback cycle–allowing us to make a community informed decision regarding what should be focused on with each release, delivering what matters to the community.

API Projects

As the version 1.0 to 1.1 migration occurred, several projects were identified or suggested for consideration. I want to make sure all of these projects are on the table as part of the evolution of HSDA, beyond just the current API design discussions. These are the projects we added to the specification that are moving forward, but will have varying degrees of impact on the core API definition.

Taxonomy There are two objects included in version 1.1 of the Human Services Data Specification (HSDS) that deal with taxonomy: the service_taxonomy object, and the core taxonomy object. I purposely left these aspects of the schema out of version 1.1 of HSDA. I wanted to see more discussion regarding taxonomy before we included it in the specification. This is one of the first areas that influenced the above discussions regarding path scope and filtering, as well as project scope and filtering.

I’d like to see taxonomy exist as a separate set of paths, as a separate project, outside of the core specification. In addition to further discussion about what HSDA taxonomy is, I’d like to see more consideration regarding what exactly constitutes an acceptable HSDA compliant taxonomy. Ideally, the definition allows for multiple taxonomies, and possibly even a direct relationship between the available content types and a taxonomy, allowing for a more meaningful API response.

I will leave a Github issue open to discuss taxonomy, and whether it moves forward as an entirely separate schema, or gets included in version 1.2 or 1.3 of the core HSDA definition. One aspect of this delay is to ensure that my awareness of available taxonomies is up to snuff to help provide guidance. I’m just not aware of everything out there, and lack intimacy with the leading taxonomies in use–I need to hear more from vendors and implementors on this subject before I feel confident making any decision.

Metadata The metadata and meta_table_description objects were two elements I also left out of version 1.1 of HSDA. I felt like there should be more discussion around API management, logging, and other aspects of API operations that feed into this area before we settled on an API design to satisfy the HSDA metadata conversation. I’d like to hear more from human services implementors regarding what metadata they desire before we connect the existing schema to the API.

The metadata conversation overlaps with the approval and feedback project. There are aspects of logging and metadata collection and storage that will contribute to the transactional nature of any approval and feedback solution. There is also a conversation going on regarding privacy concerns around API access to HSDS data, and the logging and auditing that occurs at the metadata level. This thread covers these conversations, and is looking to establish a separate group of API paths, and a separate project to drive documentation, and other aspects of API operations.

Approval & Feedback One of the projects that came up recently was about working to define the layer that allows developers to add, update, and delete data via the API. Eventually through the HSDA specification we to encourage 3rd party developers, and external stakeholders to help curate and maintain critical human services data within a community, through trusted partners.

HSDA allows for the reading and writing of organizations, locations, and services for any given area. I am looking to provide guidance on how API implementors can allow for POST, PUT, PATCH, and DELETE on their API, but require approval before any changing transaction is actually executed–requiring an internal system administrator to ultimately give the thumbs up or thumbs down regarding whether or not the change will actually occur.

A process which immediately begs for the ability to have multiple administrators, or even possibly involve external actors. How can we allow organizations to have a vote in approving changes to their data? How can multiple data stewards be notified of a change, and given the ability to approve or disapprove, logging every step along the way? Allowing any change to be approved, reviewed, audited, and even rolled back–making public data management a community affair, with observability and transparency built in by default.
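
Purely as a sketch of one possible shape for this, proposed changes could be queued as their own resource and then approved or rejected by one or more administrators. All of the paths and behavior below are hypothetical, not part of the HSDA specification.

    /changes:
      post:
        summary: Submit a proposed add, update, or delete for review instead of executing it immediately.
        responses:
          '202':
            description: Change accepted and queued for approval.
      get:
        summary: List pending changes awaiting review.
        responses:
          '200':
            description: Pending changes.
    /changes/{change_id}/approve:
      post:
        summary: Approve a pending change, executing it and logging who approved it.
        parameters:
          - name: change_id
            in: path
            required: true
            schema:
              type: string
        responses:
          '200':
            description: Change applied and recorded in the audit trail.
    /changes/{change_id}/reject:
      post:
        summary: Reject a pending change, recording the reason for the audit trail.
        parameters:
          - name: change_id
            in: path
            required: true
            schema:
              type: string
        responses:
          '200':
            description: Change rejected without being applied.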

I am doing research into different approaches to tackling this, ranging from community approaches like Wikipedia, to publish and subscribe, and other event or webhook models. I am looking for technological solutions to opening up approval to the API request and response structure, with an accompanying API and webhook surface area for managing all aspects of the approval of any API changes. If you know of any interesting solutions to this problem I’d love to hear more, so that I can include them in my research, future storytelling, and ultimately the specification for the Open Referral Human Services Data Specification and API.

Universal Unique IDs How will we allow for a universal unique ID system for all organizations, locations, and services, providing some provenance on the origin of each record? There is a solid conversation started about how to approach a universal ID system that lives alongside, or directly as part of, the core HSDA specification–depending on how we decide to approach project scope. Ideally, a universal ID system isn’t part of being compliant, but could add a healthy layer of certification for some leading providers.

More research needs to be done regarding how universal IDs are handled in other industries. An exhaustive search needs to be conducted regarding any existing standards and guidance that can help direct this discussion. This approach to handling identifiers will have a significant impact on individual API implementations, as well as the overall HSDA definition. More importantly, it will set the stage for future HSDA aggregation and federation, allowing HSDA implementations to work together more seamlessly, and better serve end-users.
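
To give the discussion something concrete to poke at, a record carrying a universal identifier and basic provenance might include fields like these. All of the property names are hypothetical.

    Organization:
      type: object
      properties:
        id:
          type: string
          description: Local identifier within a single HSDA implementation.
        universal_id:
          type: string
          description: Globally unique identifier, for example a UUID, shared across implementations.
        source:
          type: string
          description: The implementation or publisher the record originated from.
        source_updated_at:
          type: string
          format: date-time
          description: When the originating system last changed the record.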

Messaging I separated this project out of the approval and feedback project. I am suggesting that we isolate the messaging guidance for APIs, setting a standard for how you communicate within a single implementation, as well as across implementations. There are a number of messaging API standards and best practices available out there, as well as existing messaging APIs that are already in use by human services practitioners, including social channels like Facebook and Twitter, but also private channels like Slack.

HSDA compliant messaging channels should live as a separate project, and a separate set of API path specifications. It should augment the core HSDA definition, overlapping with existing contact information, but it should also dovetail with new projects like the approval and feedback system. More research needs to be conducted on existing messaging API standards, and the leading channels that existing human services implementations and their software vendors are already using.

Webhooks I want to begin a separate project for handling an important aspect of any API operation: not just being there to receive requests, but also pushing information externally, and responding to scheduled or event driven aspects of API operations. Webhooks will play a role in the approval and feedback system, as well as the metadata and messaging projects–eventually touching all aspects of the core HSDA resources, and the separate projects.

Alongside the approval and feedback, universal ID, and messaging projects, webhooks will set the stage for the future of HSDA, where individual city and regional implementations can work together, share information, federate, and share responsibility for updates and changes. Webhooks will be how each separate implementation works in concert, making the delivery of human services more real time, and orchestrated across providers, achieving the API vision of Open Referral founder Greg Bloom.
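
As one hedged example of what the webhook surface area might look like, consumers could register callback URLs against named events. The paths, event names, and payload fields here are placeholders.

    /webhooks:
      post:
        summary: Register a callback URL for events such as service.updated or change.approved (hypothetical event names).
        requestBody:
          required: true
          content:
            application/json:
              schema:
                type: object
                required: [url, events]
                properties:
                  url:
                    type: string
                    format: uri
                    description: The URL to receive event payloads.
                  events:
                    type: array
                    items:
                      type: string
                    description: The event names the subscription covers.
        responses:
          '201':
            description: Webhook subscription created.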

What Is Next? We have a lot on the table to discuss currently. We need to settle some pretty important API design discussions that will continue to have an impact on API operations for a long time. I want to help push forward the conversation around these API design discussions, and get these API projects moving forward in tandem. I need more input from the vendors, and the community around some of the pressing discussions, and then I’m confident we can settle in on what the final version 1.1 of the API specification should be, and what work we want to tackle as part of 1.2 and beyond. I’m feeling like with a little discussion we can find a path forward to reach 1.2 in the fall of 2017.


20K, 40K, 60K, and 80K Foot Levels Of Industry API Design Guidance

I am moving my Human Services Data API (HSDA) work forward and one of the top items on the list to consider as part of the move from version 1.1 to 1.2 is all around the scope of the API design portion of the standard. We are at a phase where the API design still very much reflects the Human Services Data Specification (HSDS)–basically a very CRUD (Create, Read, Update and Delete) API. With version 1.2 I need to begin considering the needs of API consumers a little more, looking to vendors and real world practitioners to help understand what the next version(s) of the API definition will/should contain.

The most prominent discussion in the move from version 1.1 to 1.2 centers around scope of API design at four distinct levels of this work, where we are looking to move forward a variety of API design concerns for a large group of API consumers:

  • Data Scope / Filtering - Discussions around how to filter data, allowing API consumers to search across the contents of any HSDA implementation, getting exactly the data they need, no more, no less.
  • Schema Scope / Filtering - Considering the design of simple, standard, or full schema responses that can be specified using a prefer header, parameter, or at the path level.
  • Path Scope / Filtering - How are API paths going to be grouped and organized, allowing a large surface area to be shared via documentation (APIs.json) in a way that lets new API consumers start simple, gives advanced users what they need, and serves as many needs in between as we can.
  • Project Scope / Filtering - Adding the fourth dimension to this scope / filtering discussion, I’m proposing we discuss how projects are defined and isolated, which can allow them to move forward at different rates, and be reflected in documentation, code, and other resources–allowing for filtering by consumers, as well as prioritization by vendors involved in the API design discussion.

In short, I have a large number of desires put on the table by vendors and practitioners. There is a mix of desire to load up as much functionality and API design guidance as we can at the single path level–meaning /organizations, /locations, and /services will allow you to get as much, or as little, as you desire. In this same conversation I have to defend the interests of newcomers, allowing them to easily learn about, and get what they need from, these three distinct API paths without loading them up with too much functionality, while also defending the long tail of needs for mobile, voice, and other leading edge application developers.

I’m looking at this discussion in these four dimensions, but I am trying to apply a horizontal or vertical approach across all four. Meaning, do I access or filter the amount of data I receive vertically, with parameters or headers at a single API path, or do I access and filter the amount of data I want horizontally, across many separate API paths? The same logic applies at the schema level: do we access it vertically at the same path, or horizontally by adding many sub paths and new paths? This continues at the API path and project levels–how do we discuss, develop, evolve, and allow API consumers to filter and only get at the API paths and projects they need–limiting the scope for new users, but meeting the demand of vendors, implementors, analysts, and other power consumers.
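
A simple way to picture that choice, sketched as OpenAPI paths with a hypothetical include parameter: the same location details can be reached vertically through one loaded up path, or horizontally through separate sub resource paths.

    paths:
      # Vertical: one path, with parameters or headers controlling how much comes back
      /locations/{location_id}:
        get:
          parameters:
            - name: location_id
              in: path
              required: true
              schema:
                type: string
            - name: include
              in: query
              required: false
              description: Comma separated sub resources to embed, for example physical-address,phones (hypothetical).
              schema:
                type: string
          responses:
            '200':
              description: The location, with any requested sub resources embedded.
      # Horizontal: separate paths, each returning one slice of the data
      /locations/{location_id}/phones:
        get:
          parameters:
            - name: location_id
              in: path
              required: true
              schema:
                type: string
          responses:
            '200':
              description: Phone numbers for the location.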

Aight, ok. Phew. I just needed to get that out. Not 100% sure it makes sense, but it is a framework I’m running with for a group conversation I’m having tomorrow–so we’ll see how it goes. One significant difference in this API design process from others is that it is not about any single API implementation. It is focused on moving forward a single API definition, or in the case of this discussion, many little API definition discussions, with lots of overlap, and under a single umbrella–to support thousands of API implementations. I’ll be publishing another piece shortly which zooms out to the 100K level of my HSDA work, where I’d consider this to be a little 20K (data), 40K (schema), 60K (path), and 80K (project) levels. I just needed to get a handle on this piece–if you actually read this far in the post, you are pretty geeky, and probably need a hobby (HSDA is mine).


Standardizing and Templatizing API Design Editor Validation Tips

I’ve been playing with Apicurio, the open source API design editor I’ve been waiting for, and saw a potential opportunity for design time collaboration, instruction, and feedback loops. When you are designing an API in Apicurio it gives you alerts based upon JSON schema validation of the underlying OpenAPI, providing a nice visual feedback loop–forcing you to complete your API definition until it properly validates.

Visual alerts and feedback based upon JSON schema validation aren’t really new or that interesting–you see them in the Swagger Editor, and many other JSON tools. Where I see an opportunity is specifically when it comes to an open source visual API design editor like Apicurio, and when the JSON schema engine for the validation responses is opened up as part of the architecture–allowing users to import and export JSON schema that goes beyond the default OpenAPI schema, which only gets us to a minimum viable OpenAPI definition. While this is good, we can do better.

I’d like to see a marketplace of JSON schema emerge, helping API designers and architects push the completeness and precision of their OpenAPI definitions beyond the speed at which the core OpenAPI spec can move, and go in directions, and cover niche definitions, that the core OpenAPI schema will never cover. I want to be able to load a schema that will help me push forward my API responses beyond just a default 200. I want to be able to load custom JSON schema crafted by API design experts who have more skills than I do, and learn from them. I want my API design editor to help me take my APIs to the next level, while also pushing my API design skills forward along the way.

Apicurio does a good job of giving plain English responses to validation errors–much better than some tools I’ve used. You can click on the detail of each alert to get more information about what is going on. I could see this entire structure opened up as part of Apicurio’s architecture, allowing custom JSON schema templates to be imported, and made more informative with plain English responses, more detail, and even links to learn more about how you can improve your API–turning the basic validation responses into more of an API design knowledge-base, and even a structured curriculum walking you through best practices when it comes to API design, and what is working in the industry.
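
For example, a community authored rule set might layer a schema like the one below on top of the default OpenAPI validation, flagging any operation that is missing a description or only defines a single response. This is a rough sketch of a JSON schema expressed as YAML, not anything Apicurio supports today.

    # A hypothetical rule applied to each operation object in an OpenAPI document
    type: object
    required: [summary, description, responses]
    properties:
      summary:
        type: string
        minLength: 10
      description:
        type: string
        minLength: 25
      responses:
        type: object
        minProperties: 2   # more than just a default 200 response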

Just some thoughts as I’m playing with Apicurio. I’m very happy to see this API design editor emerge, and I have been having fun thinking about what is possible when it comes to the road map. I feel a common visual API design interface is an important part of the next step in the evolution of the API sector. Which is why I have been advocating for an open source API design editor that any API provider can use to design their APIs, and any API service provider can bake into their services and tooling–standardizing how we all craft and communicate around our API designs. The validation feedback loop during this phase will be an important channel for pushing API designers to do what is right, make their API definitions more complete, while educating them about common API design practices, and building a more literate API workforce.


Enhancing Your API SEO

One question I’m regularly getting from my readers is how you can increase the search engine optimization (SEO) for your APIs–yes, API SEO (acronyms rule)! While we should be investing in API discoverability by embracing hypermedia early on, and in its absence indexing our entire API operations with APIs.json, and making sure we describe individual APIs using OpenAPI, the world of web APIs is still very hitched to the web, making SEO very relevant when it comes to API discoverability.

While I was diving deeper into “The API Platform”, a VERY forward leaning API deployment and management solution, I was pleased to see another mention of API SEO using JSON-LD (scroll down on the page). While I wish every API would adopt JSON-LD for their overall design, I feel we are going to have to piece SEO and discoverability together for our sites, as The API Platform demonstrates. They provide a nice example of how you can paste a JSON-LD script into the page of your API documentation, helping amplify some of the meaning and intent behind your API using JSON-LD + Schema.org.
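
As a hedged sketch of the kind of metadata they describe, the JSON-LD pasted into a documentation page (inside a script tag with the application/ld+json type) might describe the API roughly like this, shown as YAML for consistency with the other examples. The Schema.org type and properties are assumptions for illustration, and the URLs are placeholders.

    '@context': https://schema.org
    '@type': WebAPI
    name: Human Services Data API
    description: Read and write access to organizations, locations, and services in a community.
    url: https://example.com/api
    documentation: https://example.com/docs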

I have been thinking about Schema.org’s relationship to API discovery for some time now, which is something I’m hoping to get more time to invest in further during 2017. I’d like to see Schema.org get more baked into API design, deployment, and documentation, as well as JSON-LD as part of the underlying schema. To help build a bridge from where we are at, to where we need to be going, I’m going to explore how I can leverage OpenAPI tags to help autogenerate JSON-LD Schema.org tags as part of API documentation. While I’d love for everyone to just get the benefits of JSON-LD, I’m afraid many folks won’t have the bandwidth, and could use an assist from the API documentation solutions they are already using–making APIs more SEO friendly by default.

If you are starting a new API I recommend playing with “The API Platform”, as you get the benefits of Schema.org, JSON-LD, and MANY other SIGNIFICANT API concepts by default. Out of all of the API frameworks I’ve evaluated as part of my API deployment research, “The API Platform” is by far the most advanced when it comes to leading by example, and enabling healthy API design practices by default–something that will continue to bring benefits across all stops along the life cycle if you put it to work in your operations.


A Community Approval Dimension When Adding, Updating, And Deleting Via API

One of the projects I’m working on as part of my Human Services API work is trying to define the layer that allows developers to add, update, and delete data via the API. We ultimately want to empower 3rd party developers, and external stakeholders, to help curate and maintain critical human services data within a community, through trusted partners.

The Human Services API allows for the reading and writing of organizations, locations, and services for any given area. I am looking to provide guidance on how API implementors can allow for POST, PUT, PATCH, and DELETE on their API, but require approval before any changing transaction is actually executed–requiring an internal system administrator to ultimately give the thumbs up or thumbs down regarding whether or not the change will actually occur.

A process which immediately begs for the ability to have multiple administrators, or even possibly involve external actors. How can we allow organizations to have a vote in approving changes to their data? How can multiple data stewards be notified of a change, and given the ability to approve or disapprove, logging every step along the way? Allowing any change to be approved, reviewed, audited, and even rolled back–making public data management a community affair, with observability and transparency built in by default.

I am doing research into different approaches to tackling this, ranging from community approaches like Wikipedia, to publish and subscribe, and other event or webhook models. I am looking for technological solutions to opening up approval to the API request and response structure, with an accompanying API and webhook surface area for managing all aspects of the approval of any API changes. If you know of any interesting solutions to this problem I’d love to hear more, so that I can include them in my research, future storytelling, and ultimately the specification for the Open Referral Human Services Data Specification and API.


The GSA API Standards With A Working Prototype API And Portal

One way to help API developers understand API design is to provide them with a design guide, helping set a standard for how APIs should be designed across an organization or group. Another way to help developers follow best practices when it comes to API design is to provide them with a working example they can follow when developing their API(s). In my experience people learn API design best practices through following what they know–emulating what they see.

Hang on to that thought, cause now I’m going to blow your mind. Guess how API providers learn how to provide API design guides and working examples? By showcasing working examples of companies, institutions, and government agencies publishing API design guides, working APIs, and portal prototypes, so that other API providers can learn by example! BOOM! Mind blown. :-) An example of this can be found over at the General Services Administration (GSA), with their API standards guide, API prototype, and forkable API portal and documentation–complete with the essential API building blocks.

The GSA’s simple approach to providing a working example of their API standards is refreshing. They have taken an existing GSA data set and launched a prototype API, then they published the API in a complete, working API developer portal with all the essential building blocks of a basic API presence.

  • Getting Started - What you need to get started with the API.
  • Documentation - OpenAPI driven interactive API documentation.
  • Schema - A reference of the fields / schema used for API responses.
  • FAQ - Some basic questions about the API, and more importantly the API prototype.
  • Contact Info - Using Github issues for support, with an accompanying email.

The whole thing runs on Github, so it is forkable. What is great is that they also have the source code for the API on Github, essentially providing a working, forkable representation of what is expected in the GSA API design guide. This is how you plant seeds for consistent API design across an organization: an API design guide setting the standard, with a working example of what you would like to see as the finished product.

I’m really getting into this approach to defining the API conversation within an organization. I’m working with a handful of large organizations right now to develop API prototypes, evolve my current API portal definition, as well as working to influence the GSA’s approach. If you aren’t familiar with the General Services Administration (GSA), a significant portion of their mission is dedicated to delivering technology services to the over 430 departments, agencies, and sub-agencies in the federal government. The GSA API design guide, prototype, and portal provide a working example other agencies can follow when defining, designing, deploying, and managing their API presence–important stuff.


The Yes I Would Like To Talk Button When Signing Up For An API Platform

There are never enough hours in the day. I have an ever growing queue of APIs and API related services that I need to play with for the first time, or just make sure and take another look at. I was FINALLY making time to take another look at the RepreZen API Studio again when I saw that they were now supporting OpenAPI 3.0.

I am still driving it around the block, but I thought the second email I got from them when I was signing up was worth writing about. I had received a pretty standard getting started email from them, but then I got a second email from Miles Daffin, their product manager, reminding me that I can reach out, and providing me with a “Yes I Would Like To Talk” button. I know, another pretty obvious thing, but you’d be surprised how a little thing like this can actually break us from our regular isolated workspace, and make the people behind an API, or API related service, more accessible. The email was pretty concise and simple, but what caught my eye when I scanned it was the button.

Anyways, just a small potential building block for your API communication strategy. I’ll be adding it to the list I am cultivating. I’m not a big hard sell kind of guy, and I appreciate soft outreach like this–leaving it up to me when I want to connect. I’ll keep playing with RepreZen API Studio and report back when I have anything worth sharing. I just wanted to make sure this type of signup email was included in my API communication research.


Revisiting GraphQL As Part Of My API Toolbox

I’ve been reading and curating information on GraphQL as part of my regular research and monitoring of the API space for some time now. As part of this work, I wanted to take a moment and revisit my earlier thoughts about GraphQL, and see where I currently stand. Honestly, not much has changed to move me in one direction or another regarding this popular approach to providing API access to data and content resources.

I still stand by my cautionary advice for GraphQL evangelists regarding not taking such an adversarial stance when it comes to the API approach, and I feel that GraphQL is a good addition for any API architect looking to have a robust and diverse API toolbox. Even with the regular drumbeat from GraphQL evangelists, and significant adoption like the Github GraphQL API, I am not convinced it is the solution for all APIs, or a replacement for simple RESTful web API design.

My current position is that the loudest advocates for GraphQL aren’t looking at the holistic benefits of REST, and are too wrapped up in ideology, which is setting them up for challenges similar to those that linked data, hypermedia, and even early RESTafarian practitioners have faced. I think GraphQL excels when you have a well educated, known, and savvy audience, who are focused on developing web and mobile applications–especially the latest breed of single page applications (SPA). I feel like in this environment GraphQL is going to rock it, and help API providers reduce friction for their consumers.

This is why I’m offering advice to GraphQL evangelists to turn down the anti-REST, complete replacement or alternative for REST framing–it ain’t helping your cause and will backfire on you. You are better off educating folks about the positives, and being honest about the negatives. I will keep studying GraphQL, understanding the impact it is making, and keeping an eye on important implementations. However, when it comes to writing about GraphQL you are going to see me continue to hold back, just like I did when it came to hypermedia and linked data, because I prefer not to be in the middle of ideological battles in the API space. I prefer showcasing the useful tools and approaches that are making a significant impact across a diverse range of API providers–not just echoing what is coming out of a handful of big providers, amplified by vendors and growth hackers looking for conversions.




Every API Should Begin With A Github Repository

I’m working on my API definition and design strategy for my human services API work, and as I was doing this Box went all in on OpenAPI, adding to the number of API providers I track on who not only have an OpenAPI, but also use Github as the core of the management of their API definition.

Part of my API definition and design advice for human service API providers, and the vendors who sell software to them, is that they have an OpenAPI and JSON schema defined for their API, and share it either publicly or privately using a Github repository. When I evaluate a new vendor or service provider as part of the Human Services Data API (HSDA) specification, I’m beginning to require that they share their API definition and schema using Github–if you don’t have one, I’ll create it for you. Having a machine-readable definition of the surface area of an API, and the underlying schema, in a Github repo I can checkout, commit to, and access via an API is essential.

Every API should begin with a Github repository in my opinion, where you can share the API definition, documentation, schema, and have a conversation around these machine readable docs using Github issues. Approaching your API in this way doesn’t just make it easier to find when it comes to API discovery, but it also makes your API contract available at all steps of the API design lifecycle, from design to deprecation.


Craft Your API Design Guide So You Can Move To Other Areas of The Lifecycle

I am working on an API definition and design guide for my human services API work, helping establish a framework for approaching API design as part of the human services data and API specification, but also for implementers to follow in their own individual deployments. Every time I work on the subject of API design, I’m reminded of how far behind the API sector is when it comes to standardizing what it is we do.

Every month or so I see a new company publicly share their API design guide. When they do, my friend Arnaud always adds it to his API Stylebook, adding to the wealth of information available in his work. I’m happy to see each API design guide release, but in reality, ALL API providers should have an API design guide, and they should also be open to publishing it publicly, showing their consumers they have their act together, and sharing with the wider API community the best practices in play.

The lack of companies sharing their API design practices and their API definitions is why we have such a deficiency when it comes to common API patterns in use. It is why we have so many variations of web APIs, as well as the underlying schema. We have an API industry because early practitioners like SalesForce, Amazon, eBay, Flickr, Delicious, Twitter, Youtube, and others were open with their API operations. People emulate what they see and know. Each wave of the API sector depends on the previous wave sharing what they do publicly–it is how this all works.

To demonstrate even further how deficient we are, I do not find companies sharing their guides for API deployment, management, testing, monitoring, clients, and other stops along the API lifecycle. I’m glad we are seeing an uptick in the number of API design guides, but we need this practice to spread to every other stop. We need successful providers to share how they deploy their APIs, and when any company hires a new developer, that developer should ALWAYS be given a standard guide for deploying, managing, and testing, as well as designing APIs.

It’s not rocket science, and honestly, it’s not even technical. It just means pausing for a moment, thinking about how we approach each stop in the API lifecycle, writing up an overview, publishing, and sharing it with API stakeholders, and even the wider API community. Every company doing APIs in 2017 should be crafting an API design guide so you can get to work on guides for the other areas of your lifecycle, thinking through and standardizing your approach, and making it known to every person involved–ideally, you are also being very public about all of this, and sharing your work with me and Arnaud, so we can get the word out about the good stuff you are up to! ;-)


How Twitter Handles Sorting For Their API

I was looking into some of the common approaches by API providers for the sorting of data in API responses. I’m not in the business of finding the right answer, I am in the business of finding successful examples from APIs (brands) that people are familiar with–I thought Twitter’s page in their API documentation dedicated to sorting was worth noting.

When you craft your Twitter API request you just append sort_by=[attribute name]-[asc/desc], where the attribute is a valid attribute that is returned in the JSON of your GET request. An example of this is using sort_by=name-asc to sort by name alphabetically, or sort_by=name-desc to sort in reverse–providing a pretty basic approach that API providers can consider when designing sort functionality into their APIs.
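
For API providers looking to borrow the same pattern, a sort parameter along these lines is easy to sketch in OpenAPI terms. The attribute names and the regular expression are illustrative, not taken from Twitter’s documentation.

    parameters:
      - name: sort_by
        in: query
        required: false
        description: Attribute and direction to sort by, for example name-asc or created_at-desc.
        schema:
          type: string
          pattern: '^[a-z_]+-(asc|desc)$'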

I’ll be documenting all the approaches I find from known providers, developing a catalog of options for a variety of use cases. I’ll also spend some time looking at how GraphQL tackles the problem, providing a much more holistic approach to managing data using APIs. When I go through my API design checklist each round, I like adding a variety of diverse tooling to it, based upon examples I find from the strongest API providers. Healthy diversity in your API toolbox will be increasingly important to assist in tackling a variety of data, content, or algorithmic challenges.


If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.