RSS

API Design News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API design conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is designing its APIs, and the underlying schema used as part of their requests and responses.

A Community Approval Dimension When Adding, Updating, And Deleting Via API

One of the projects I'm working on as part of my Human Services API work is trying to define the layer that allows developers to add, update, and delete data via the API. We ultimately want to empower 3rd party developers and external stakeholders to help curate and maintain critical human services data within a community, through trusted partners.

The Human Services API allows for the reading and writing of organizations, locations, and services for any given area. I am looking to provide guidance on how API implementors can allow for POST, PUT, PATCH, and DELETE on their API, but require approval before any changing transaction is actually executed--requiring an internal system administrator to ultimately give the thumbs up or thumbs down regarding whether or not the change will actually occur.

A process like this immediately begs for the ability to have multiple administrators, or even possibly involve external actors. How can we allow organizations to have a vote in approving changes to their data? How can multiple data stewards be notified of a change, and given the ability to approve or disapprove, logging every step along the way? Allowing any change to be approved, reviewed, audited, and even rolled back. Making public data management a community affair, with observability and transparency built in by default.
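
As a rough sketch of what this pattern might look like (the paths, field names, and in-memory store here are all hypothetical, standing in for a real implementation), a write could be accepted but held in a pending state, returning a 202 Accepted that points at a change record stewards can then vote on:

```python
# A minimal sketch of the approval pattern, assuming hypothetical
# /services paths and an in-memory dictionary standing in for a database.
import uuid

from flask import Flask, jsonify, request

app = Flask(__name__)
pending_changes = {}  # change_id -> change record awaiting steward review

@app.route("/services/<service_id>", methods=["PUT"])
def update_service(service_id):
    # Instead of applying the update, queue it for review.
    change_id = str(uuid.uuid4())
    pending_changes[change_id] = {
        "resource": f"/services/{service_id}",
        "payload": request.get_json(),
        "status": "pending",
        "approvals": [],  # every steward vote gets logged here
    }
    # 202 Accepted signals the request was received but not yet executed.
    headers = {"Location": f"/changes/{change_id}"}
    return jsonify({"change_id": change_id, "status": "pending"}), 202, headers

@app.route("/changes/<change_id>/approvals", methods=["POST"])
def approve_change(change_id):
    # A data steward records an approve/disapprove vote, leaving an audit trail.
    change = pending_changes[change_id]
    change["approvals"].append(request.get_json())
    return jsonify(change), 201
```

The change itself would only execute once the required approvals are logged, keeping every step observable, auditable, and reversible.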

I am doing research into different approaches to tackling this, ranging from community approaches like Wikipedia, to publish and subscribe, and other event or webhook models. I am looking for technological solutions to opening up approval within the API request and response structure, with an accompanying API and webhook surface area for managing all aspects of the approval of any API changes. If you know of any interesting solutions to this problem I'd love to hear more, so that I can include them in my research, future storytelling, and ultimately the specification for the Open Referral Human Services Data Specification and API.


The GSA API Standards With A Working Prototype API And Portal

One way to help API developers understand API design is to provide them with a design guide, helping set a standard for how APIs should be designed across an organization or group. Another way to help developers follow best practices when it comes to API design is to provide them with a working example they can follow when developing their API(s). In my experience people learn API design best practices through following what they know–emulating what they see.

Hang on to that thought, cause now I'm going to blow your mind. Guess how API providers learn to provide API design guides and working examples? By seeing working examples of companies, institutions, and government agencies publishing API design guides, working APIs, and portal prototypes, so that other API providers can learn by example! BOOM! Mind blown. :-) An example of this can be found over at the General Services Administration (GSA), with their API standards guide, API prototype, and forkable API portal and documentation--complete with the essential API building blocks.

The GSA’s simple approach to providing a working example of their API standards is refreshing. They have taken an existing GSA data set and launched a prototype API, then they published the API in a complete, working API developer portal with all the essential building blocks of a basic API presence.

  • Getting Started - What you need to get started with the API.
  • Documentation - OpenAPI driven interactive API documentation.
  • Schema - A reference of the fields / schema used for API responses.
  • FAQ - Some basic questions about the API, and more importantly the API prototype.
  • Contact Info - Using Github issues for support, with an accompanying email.

The whole thing runs on Github, so it is forkable. What is great is that they also have the source code for the API on Github, essentially providing a working, forkable representation of what is expected in the GSA API design guide. This is how you plant seeds for consistent API design across an organization: an API design guide setting the standard, with a working example of what you would like to see as the finished product.

I’m really getting into this approach to defining the API conversation within an organization. I’m working with a handful of large organizations right now to develop API prototypes, evolve my current API portal definition, as well as working to influence the GSA’s approach. If you aren’t familiar with the General Services Administration (GSA), a significant portion of their mission is dedicated to delivering technology services to the over 430 departments, agencies, and sub-agencies in the federal government. The GSA API design guide, prototype, and portal provide a working example other agencies can follow when defining, designing, deploying, and managing their API presence–important stuff.


The Yes I Would Like To Talk Button When Signing Up For An API Platform

There are never enough hours in the day. I have an ever-growing queue of APIs and API related services that I need to play with for the first time, or just make sure and take another look at. I was FINALLY making time to take another look at RepreZen API Studio when I saw that they were now supporting OpenAPI 3.0.

I am still driving it around the block, but I thought the second email I got from them when I was signing up was worth writing about. I had received a pretty standard getting started email from them, but then I got a second email from Miles Daffin, their product manager, reminding me that I can reach out, and providing me with a "Yes I Would Like To Talk" button. I know, another pretty obvious thing, but you'd be surprised how a little thing like this can actually break us from our regular isolated workspace, and make the people behind an API, or API related service, more accessible. The email was pretty concise and simple, but what caught my eye when I scanned it was the button.

Anyways, just a small potential building block for your API communication strategy. I'll be adding it to the list I am cultivating. I'm not a big hard sell kind of guy, and I appreciate soft outreach like this--leaving it up to me when I want to connect. I'll keep playing with RepreZen API Studio and report back when I have anything worth sharing. I just wanted to make sure this type of signup email was included in my API communication research.


Revisiting GraphQL As Part Of My API Toolbox

I've been reading and curating information on GraphQL as part of my regular research and monitoring of the API space for some time now. As part of this work, I wanted to take a moment and revisit my earlier thoughts about GraphQL, and see where I currently stand. Honestly, not much has changed to move me in one direction or another regarding this popular approach to providing API access to data and content resources.

I still stand by my cautionary advice for GraphQL evangelists regarding not taking such an adversarial stance when it comes to the API approach, and I feel that GraphQL is a good addition for any API architect looking to have a robust and diverse API toolbox. Even with the regular drumbeat from GraphQL evangelists, and significant adoption like the Github GraphQL API, I am not convinced it is the solution for all APIs, or a replacement for simple RESTful web API design.

My current position is that the loudest advocates for GraphQL aren't looking at the holistic benefits of REST, and are too wrapped up in ideology, which is setting them up for similar challenges that linked data, hypermedia, and even early RESTafarian practitioners have faced. I think GraphQL excels when you have a well educated, known, and savvy audience, who are focused on developing web and mobile applications--especially the latest breed of single page applications (SPA). I feel like in this environment GraphQL is going to rock it, and help API providers reduce friction for their consumers.

This is why I'm offering advice to GraphQL evangelists to turn down the anti-REST rhetoric, and the positioning of GraphQL as a complete replacement or alternative for REST--it ain't helping your cause and will backfire on you. You are better off educating folks about the positives, and being honest about the negatives. I will keep studying GraphQL, understanding the impact it is making, and keeping an eye on important implementations. However, when it comes to writing about GraphQL you are going to see me continuing to hold back, just like I did when it came to hypermedia and linked data, because I prefer not to be in the middle of ideological battles in the API space. I prefer showcasing the useful tools and approaches that are making a significant impact across a diverse range of API providers--not just echoing what is coming out of a handful of big providers, amplified by vendors and growth hackers looking for conversions.


Recent API Paths

I was learning about a new API path from the document platform Box that was designed specifically for showing recently updated objects. I think that the concept of having API paths dedicated to showing recently changed elements makes sense, helping eliminate the need for API consumers to learn which parameters are needed to achieve their immediate goals, and exposing useful aspects of the platform through API design.

As an API consumer, it can be a lot of work to get at the meaningful and relevant context of an API platform if you do not know all the right knobs and levers to pull on. This is where API design comes in handy, helping surface the most relevant and contextual aspects of what is going on. While this may not be the approach all API providers should be taking, especially if you have a savvy API consumer audience, in times when you have a wider, more unknown audience who may be looking to just get at what is going on with their users' platform integrations, this type of API design can be a good way to reduce friction.
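
To illustrate the difference (with hypothetical paths and parameters, loosely modeled on what Box offers), compare the knobs-and-levers version of the query with a dedicated recent path:

```python
import requests

API = "https://api.example.com"  # hypothetical base URL
headers = {"Authorization": "Bearer YOUR_TOKEN"}

# The parameter-driven version: the consumer has to know which knobs
# to turn to get at recent activity.
generic = requests.get(
    f"{API}/items",
    params={"sort": "modified_at", "direction": "DESC", "limit": 25},
    headers=headers,
)

# The designed version: a dedicated path surfaces the same context
# without any parameter knowledge.
recent = requests.get(f"{API}/recent_items", headers=headers)
```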

I am writing about the concept of recent API paths to think about how this can be added to my API design toolbox. While not a REST, hypermedia (wait, maybe), design pattern, I think that in some scenarios, having a recent API path makes a lot of sense. Some platforms are only relevant for a specific time period, and the rest of what is happening can be dealt with through search, taxonomy, and other ways of archiving. Once I write about an API design pattern I come across it kind of activates it in my brain and makes it something I will be thinking about as I review other APIs, as well as design my own APIs, and who knows, maybe it will do the same for you!


Apicurio Is The Open Source Visual API Design Editor I Was Looking For

I've been wanting someone to create an open source API editor for some time, and now the folks over at Red Hat / 3Scale have delivered one called Apicurio. It is a web-based Angular2 app for visually designing your APIs using OpenAPI, with a Github focus.

Apicurio is that blend of visual designer, and code view that I was hoping for, letting you manage all your paths, and definitions using OpenAPI via Github. It doesn’t have all the bells and whistles I’d love to see in my perfect API design editor, but they are just getting going, and I think it is an excellent start.

Using Apicurio you can start a new API, or begin with an existing API by importing an OpenAPI (as it should be). When you are editing each path, it breaks up your verbs and has grayed out placeholders for adding any verbs you are missing--great inline API design literacy, helping folks quickly expand the design of their API. It is a slick, intuitive API design interface where it takes seconds to grasp what's going on and begin expanding the surface area of an API.

After making changes you can save it to Github, helping center the API design and definition process around Github, which can then be applied to the center of any API lifecycle. I really like how the design tool is a visual interface, but you can always get at the machine readable definition behind it, and edit it directly if you prefer. I feel like it is an interface that both developers and non-developers can put to work while still keeping OpenAPI at the center.

You can see where they are headed with Apicurio by checking out the roadmap, as well as hints in the interface--like grayed out buttons for testing and documentation. I can see the tool turning into a full blown lifecycle management solution, allowing you to design, deploy, manage, document, test, and cover many other useful areas along the API lifecycle. The OpenAPI definition and a Github repo core will all help set the stage for Apicurio delivering across the API lifecycle for developers.

I am adding Apicurio to my API design research, and will be playing with it, and keeping an eye on where it is going. It feels like the core API design editor the API space needs right now. We need a lot of standardizing at the API design layer at the moment, and having a consistent visual editor for API service providers and individual API designers to employ will go a long way in onboarding new users, and helping existing users deploy APIs more consistently.

When it comes to API documentation, there isn't an excuse for most API providers and service providers to be hand-rolling their own--there is a good selection of open source solutions out there. Now there is movement in the same direction when it comes to visual API designers that keep OpenAPI at the core. It will take a couple years to play out, but having an open source API design editor will have a significant impact on the community, similar to what open source interactive API documentation and API management solutions have done over the last 5+ years.

Nice work Red Hat / 3Scale!


Every API Should Begin With A Github Repository

I'm working on my API definition and design strategy for my human services API work, and as I was doing this, Box went all in on OpenAPI, adding to the number of API providers I track on who not only have an OpenAPI, but also use Github as the core management for their API definition.

Part of my API definition and design advice for human service API providers, and the vendors who sell software to them, is that they have an OpenAPI and JSON schema defined for their API, and share this either publicly or privately using a Github repository. When I evaluate a new vendor or service provider as part of the Human Services Data API (HSDA) specification, I'm beginning to require that they share their API definition and schema using Github--if you don't have one, I'll create it for you. Having a machine-readable definition of the surface area of an API, and the underlying schema, in a Github repo I can checkout, commit to, and access via an API is essential.

Every API should begin with a Github repository in my opinion, where you can share the API definition, documentation, schema, and have a conversation around these machine readable docs using Github issues. Approaching your API in this way doesn’t just make it easier to find when it comes to API discovery, but it also makes your API contract available at all steps of the API design lifecycle, from design to deprecation.


Craft Your API Design Guide So You Can Move To Other Areas of The Lifecycle

I am working on an API definition and design guide for my human services API work, helping establish a framework for approaching API design as part of the human services data and API specification, but also for implementers to follow in their own individual deployments. Every time I work on the subject of API design, I’m reminded of how far behind the API sector is when it comes to standardizing what it is we do.

Every month or so I see a new company publicly share their API design guide. When they do my friend Arnaud always adds to his API Stylebook, adding it to the wealth of information available in his work. I’m happy to see each API design guide release, but in reality, ALL API providers should have an API design guide, and they should also be open to publishing it publicly, showing their consumers they have their act together, and sharing with the wider API community the best practices in play.

The lack of companies sharing their API design practices and their API definitions is why we have such a deficiency when it comes to common API patterns in use. It is why we have so many variations of web APIs, as well as the underlying schema. We have an API industry because early practitioners like SalesForce, Amazon, eBay, Flickr, Delicious, Twitter, Youtube, and others were open with their API operations. People emulate what they see and know. Each wave of the API sector depends on the previous wave sharing what they do publicly–it is how this all works.

To demonstrate even further how deficient we are, I do not find companies sharing their guides for API deployment, management, testing, monitoring, clients, and other stops along the API lifecycle. I'm glad we are seeing an uptick in the number of API design guides, but we need this practice to spread to every other stop. We need successful providers to share how they deploy their APIs, and when any company hires a new developer, they should ALWAYS be given a standard guide for deploying, managing, and testing, as well as designing APIs.

It’s not rocket science, and honestly, it’s not even technical. It just means pausing for a moment, thinking about how we approach each stop in the API lifecycle, writing up an overview, publishing, and sharing it with API stakeholders, and even the wider API community. Every company doing APIs in 2017 should be crafting an API design guide so you can get to work on guides for the other areas of your lifecycle, thinking through and standardizing your approach, and making it known to every person involved–ideally, you are also being very public about all of this, and sharing your work with me and Arnaud, so we can get the word out about the good stuff you are up to! ;-)


How Twitter Handles Sorting For Their API

I was looking into some of the common approaches API providers use for sorting of data in API responses. I'm not in the business of finding the right answer, I am in the business of finding successful examples from APIs (brands) that people are familiar with--I thought Twitter's page in their API documentation dedicated to sorting was worth noting.

When you craft your Twitter API request you just append sort_by=[attribute name]-[asc/desc], where the attribute is a valid attribute that is returned in the JSON of your GET request. An example of this is using ?sort_by=name-asc to sort by name alphabetically, or ?sort_by=name-desc to sort in reverse--a pretty basic approach that API providers can consider when designing sort functionality in their API.
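
Sketched out with the Python requests library against a hypothetical endpoint, that pattern looks something like this:

```python
import requests

# A hypothetical endpoint following the sort_by=[attribute]-[asc/desc] pattern.
response = requests.get(
    "https://api.example.com/lists",
    params={"sort_by": "name-asc"},  # or "name-desc" to reverse the order
)

for item in response.json():
    print(item["name"])
```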

I’ll be documenting all the approaches I find from known providers, developing a catalog of options for a variety of use cases. I’ll also spend some time looking at how GraphQL tackles the problem, providing a much more holistic approach to managing data using APIs. When I go through my API design checklist each round, I like adding a variety of diverse tooling to it, based upon examples I find from the strongest API providers. Healthy diversity in your API toolbox will be increasingly important to assist in tackling a variety of data, content, or algorithmic challenges.


Considering Using HTTP Prefer Header Instead Of Field Filtering For This API

I am working my way through a variety of API design considerations for the Human Services Data API (HSDA) that I'm working on with Open Referral. I was working through my thoughts on how I wanted to approach the filtering of the underlying data schema of the API, and Shelby Switzer (@switzerly) suggested I follow Irakli Nadareishvili's advice and consider using RFC 7240, the Prefer Header for HTTP, instead of some of the commonly seen approaches to filtering which fields are returned in an API response.

I find this approach to be of interest for this Human Services Data API implementation because I want to lean on API design over providing parameters for consumers to dial in the query they are looking for. While I'm not opposed to going down the route of providing a more parameter-based approach to defining API responses, in the beginning I want to carefully craft endpoints for specific use cases, and I think the usage of the HTTP Prefer header helps extend this to the schema, allowing me to craft simple, full, or specialized representations of the schema for a variety of potential use cases (i.e. mobile, voice, bot).

It adds a new dimension to API design for me. Since I've been using OpenAPI I've gotten better at considering the schema alongside the surface area of the APIs I design, showing how it is used in the request and response structure of my APIs. I like the idea of providing tailored schema in responses over allowing consumers to dynamically filter the schema that is returned using request parameters. At some point, I can see embracing a GraphQL approach to this, but I don't think that human service data stewards will always know what they want, and we need to invest in a thoughtful set of design patterns that reflect exactly the schema they will need.

Early on in this effort, I like allowing API consumers to request minimal, standard, or full schema for human service organizations, locations, and services using the Prefer header, over adding another set of parameters that filter the fields--it reduces the cognitive load for them in the beginning. Before I introduce any more parameters to the surface area, I want to better understand some of the other aspects of taxonomy and metadata proposed as part of HSDS. At this point, I'm just learning about the Prefer header, and suggesting it as a possible solution for allowing human services API consumers to have more control over the schema that is returned to them, without too much overhead.
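
As a sketch of what a request might look like (using return=minimal, one of the two preferences RFC 7240 itself defines--a custom token for minimal/standard/full schema would be an extension on top of that):

```python
import requests

# Ask for a minimal representation of an organization using the Prefer
# header, rather than a field-filtering query parameter.
response = requests.get(
    "https://api.example.com/organizations/42",  # hypothetical endpoint
    headers={"Prefer": "return=minimal"},
)

# RFC 7240 says servers should echo the preferences they honored back
# in the Preference-Applied response header.
print(response.headers.get("Preference-Applied"))
```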


My API Design Checklist For This Version Of The Human Services Data API

I am going through my API design checklist for the Human Services Data API work I am doing. I’m trying to make sure I’m not forgetting anything before I propose a v1.1 OpenAPI draft, so I pulled together a simple checklist I wanted to share with other stakeholders, and hopefully also help keep me focused.

First, to support my API design work I got to work on these areas for defining the HSDS schema and the HSDA definition:

  • JSON Schema - I generated a JSON Schema from the HSDS documentation.
  • OpenAPI - I crafted an OpenAPI for the API, generating GET, POST, PUT, and DELETE methods for 100% of the schema, reflecting its use in the API request and response.
  • Github Repo - I published it all in a Github repository for sharing with stakeholders, and programmatic usage across any tooling and applications being developed.

Then I reviewed the core elements of my API design to make sure I had everything I wanted to cover in this cycle, with the resources we have:

  • Domain(s) - Right now I’m going with api.example.com, and developer.example.com for the portal.
  • Versioning - I know many of my friends are gonna give me grief, but I’m putting versioning in the URL, keeping things front and center, and in alignment with the versioning of the schema.
  • Paths - Really not much to consider here as the paths are derived from the schema definitions, providing a pretty simple, and intuitive design for paths–will continue adding guidance for future APIs.
  • Verbs - A major part of this release was making sure 100% of the surface area of the HSDS schema has the ability to POST, PUT, and DELETE, as well as just GET a response. I'm not addressing PATCH in this cycle, but it is on the roadmap.
  • Parameters - There are only a handful of query parameters present in the primary paths (organizations, locations, services), and a robust set for use in /search. Other than that, everything is mostly defined through path parameters, keeping things cleanly separated between path and query.
  • Headers - I’m only using headers for authentication. I’m also considering using the HTTP Prefer Header for schema filtering, but nothing else currently.
  • Actions - Nothing to do here either, as the API is pretty CRUD at this point, and I’m awaiting more community feedback before I add any more detailed actions beyond what is possible with the default verbs–when relevant I will add guidance to this area of the design.
  • Body - All POST and PUT methods use the body for request transport. There are no other uses of the body across the current design.
  • Pagination - I am just going with what is currently in place as part of v1.0 for the API, which uses page and per_page for handling this (these choices come together in the request sketch after this list).
  • Data Filtering - The core resources (organizations, locations, and services) all have a query parameter for filtering data, and the search path has a set of parameters for filtering data returned in the response. Not adding anything new for this version.
  • Schema Filtering - I am taking Irakli Nadareishvili's advice and going to go with RFC 7240 - Prefer Header for HTTP, and craft some different representations when it comes to filtering the schema that is returned.
  • Sorting - There is no sorting currently. I did some research in this area, but not going to make any recommendations until I hear more requests from consumers, and the community.
  • Operation ID - I went with camelCase for all API operation IDs, providing a unique reference to be included in the OpenAPI.
  • Requirements - Going through and making sure all the required fields are reflected in the definitions for the OpenAPI.
  • Status Codes - Currently I’m going to just reflect the 200 HTTP status code. I don’t want to overwhelm folks with this release and I would like to accumulate more resources so I can invest in a proper HTTP status code strategy.
  • Error Responses - Along with the status code work I will define a core set of definitions to be used across a variety of responses and HTTP statuses.
  • Media Types - While not a requirement, I would like to encourage implementors to offer four default media types: application/json, application/xml, text/csv, and text/html.
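
Pulled together, a typical request under these choices would look something like the following sketch (api.example.com is the placeholder domain from above, and the rest of the details are illustrative rather than final):

```python
import requests

# Versioning in the URL, a query parameter for data filtering, and
# page / per_page for pagination, per the checklist above.
response = requests.get(
    "https://api.example.com/v1.1/organizations",
    params={"query": "food pantry", "page": 2, "per_page": 50},
    headers={
        "Authorization": "Bearer YOUR_TOKEN",  # headers only used for auth
        "Prefer": "return=minimal",            # candidate schema filtering approach
    },
)

assert response.status_code == 200  # only 200 is defined in this iteration
print(response.json())
```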

After being down in the weeds I wanted to step back and just think about some of the common sense aspects of API design:

  • Granularity - I think the API provides a very granular approach to getting at the HSDS schema. If I just want a phone number for a location, and I know its location id I can get it. It’s CRUD, but it’s a very modular CRUD that reflects the schema.
  • Simplicity - I worked hard to keep things as simple as possible, and not running away with adding dimensions to the data, or adding on the complexity of the taxonomy that will come with future iterations and some of the more operational level APIs that are present in the current metadata portion of the schema.
  • Readability - While lengthy, the design of the API is readable and scannable. Maybe I'm biased, but I think the documentation flows, and anyone can read it and get an idea of the possibilities with the human services API.
  • Relationships - There really isn’t much sophistication in the relationships present in the HSDA. Organizations, locations, and services are related, but you need to construct your own paths to navigate these relationships. I intentionally kept the schema flat, as this is a minor release. Hypermedia and other design patterns are being considered for future releases–this is meant to be a basic API to get at the entire HSDS schema.

I have a functioning demo of this v1.1 release, reflecting most of the design decisions I listed above. While not a complete API design list, it provides me with a simple checklist to apply to this iteration of the Human Services Data API (HSDA). Since this design is more specification than actual implementation, this API design checklist can also act as guidance for vendors and practitioners when designing their own APIs beyond the HSDS schema.

Next, I’m going to tackle some of the API management, portal, and other aspects of operating a human services API. I’m looking to push my prototype to be a living blueprint for providers to go from schema and OpenAPI to a fully functioning API with monitoring, documentation, and everything else you will need. The schema and OpenAPI are just the seeds to be used at every step of a human services API life cycle.


Avoid Moving Too Fast For My API Audience

I am stepping back today and thinking about a pretty long list of API design considerations for the Human Services Data API (HSDA), providing guidance for municipal 211 providers who are implementing an API. I'm making simple API design decisions, from how I define query parameters all the way to hypermedia decisions for version 2.0 of the HSDA API.

There are a ton of things I want to do with this API design. I really want folks involved with municipal 211 operations to be adopting it, helping ensure their operations are interoperable, and I can help incentivize developers to build some interesting applications. As I think through the laundry list of things I want, I keep coming back to my audience of practitioners, you know the people on the ground with 211 operations that I want to adopt an API way of doing things.

My target audience isn't steeped in API. They are regular folks trying to get things done on a daily basis. This move from v1.0 to v1.1 is incremental. It is not about anything big. Primarily this move was to make sure the API reflected 100% of the surface area of the Human Services Data Specification (HSDS), keeping in sync with the schema's move from v1.0 to v1.1, and not much more. I need to onboard folks with the concept of HSDS, and what access looks like using the API--I do not have much more bandwidth to do much else.

I want to avoid moving too fast for my API audience. I can see version 2, 3, even 4 in my head as the API architect and designer, but am I thinking of me, or my potential consumers? I'm going to seize this opportunity to educate my target audience about APIs using the roadmap for the API specification. I have a huge amount of educating to do with 211 operators, as well as the existing vendors who sell them software, when it comes to APIs. This stepping back has pulled the reins in on my API design strategy, encouraging me to keep things as simple as possible for this iteration, and helping me think more thoughtfully about what I will be releasing as part of each incremental, as well as major, release the future holds.


Studying How Providers Are Supporting Batch API Requests

A recent addition to my API research is the concept of making batch API requests. I was reminded of this during a webinar I did with Cloud Elements, when they cited batch API requests as an area needing improvement in their State of API Integration report. I had also recently come across several batch APIs while profiling the Google API stack, so I already had the topic in my notebook, and Cloud Elements pushed me to add it to my research.

Here are a handful of batch API implementations I am working through, to better understand how providers are approaching the problem:

  • Facebook Graph API
  • Google Cloud Storage - https://cloud.google.com/storage/docs/json_api/v1/how-tos/batch
  • Full Contact
  • Zendesk
  • SalesForce
  • Microsoft Office
  • Amazon
  • MailChimp
  • Meetup
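
Facebook's implementation gives a feel for one common pattern: a single POST carrying a JSON array of sub-requests, each with its own method and relative URL. A rough sketch of a consumer using it (check the current Graph API documentation for the authoritative details):

```python
import json

import requests

# A single HTTP call carrying multiple sub-requests as a JSON array,
# in the style of the Facebook Graph API batch endpoint.
batch = [
    {"method": "GET", "relative_url": "me"},
    {"method": "GET", "relative_url": "me/friends?limit=10"},
]

response = requests.post(
    "https://graph.facebook.com",
    data={"access_token": "YOUR_TOKEN", "batch": json.dumps(batch)},
)

# The response is an array as well, one result per sub-request.
for result in response.json():
    print(result["code"], result["body"])
```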

As I do with my approach to API research, I will process the common patterns I come across in each of these implementations, then add them as building blocks in my API design research, hopefully providing some details API providers can consider early on in the API lifecycle. I'm not looking to tell people how to deliver batch APIs--I am just looking to shine a light on how the successful APIs are already doing it.

I feel like batch APIs are a response to more APIs, and less direct database access or full data download availability. While modular, simple APIs that do one thing well work in many situations, sometimes you need to move large amounts of resources around and make API requests that do more than just update a single resource or database record. I'll file this research under API design, but I'll also make mention of it at the database and other levels, as I identify the variances in how bulk API requests are being made and the solutions they are providing.


Google Spanner Is A Database With An API Core

I saw the news that Google's Spanner Database is ready for prime time, and I wanted to connect it with a note I took at the Google Analyst Summit a few months back--that gRPC is the heart of the database solution. I'm not intimate with the Spanner architecture, approach, or codebase yet, but the API focus--both the gRPC core, and the REST APIs for the database platform--is very interesting.

My first programming job was in 1987, developing COBOL databases. I’ve watched the database world evolve, contributing to my interest in APIs, and I have to say Google Spanner isn’t something I anticipated. Databases have always been where you start deploying an API, but Spanner feels like something new, where the database and the API are one, and the way the database does everything internally and externally is done via APIs (gRPC).

Now that the Spanner Database is ready for prime time, I will invest some more time in standing up an instance of it and get to work playing with what is possible with the REST APIs. I also want to push forward my gRPC education by hacking on this side of the database's interface. Spanner feels like a pretty seismic shift in how we do APIs, and how we do them at scale--when you combine this with the elasticity of the cloud, and the simplicity of RESTful interfaces, I think there is a lot of potential.


Communication With Consumers Through The Design Of Our APIs

Many of the problems associated with API adoption can often be mitigated via more communication. I track on a number of ways the successful API providers are communicating around their API efforts, but I also like it when API designers and architects communicate through the actual technical design of their APIs. One example of this can be found in the IETF draft submission for The Sunset HTTP Header, by Erik Wilde.

"This specification defines the Sunset HTTP response header field, which indicates that a URI is likely to become unresponsive at a specified point in the future."

In his original post, nothing lasts forever (and some things disappear according to a schedule), Erik shows us that a little bit of embedded communication like this in the design of our APIs can help make API consumers more in tune with what is going on. It is tough to get people's attention sometimes, and I think when us engineers are heads down in the code we tune out the regular channels, and baking things into the request and response structure of the API can help out a lot.
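
Part of the appeal is how little is involved on either side--a consumer just has to watch for the header (a minimal sketch, against a hypothetical API):

```python
from email.utils import parsedate_to_datetime

import requests

response = requests.get("https://api.example.com/v1/widgets")  # hypothetical API

# The Sunset header carries an HTTP-date after which the URI may go away.
sunset = response.headers.get("Sunset")
if sunset:
    retired_on = parsedate_to_datetime(sunset)
    print(f"Heads up: this endpoint is scheduled to go away on {retired_on}")
```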

I like the concept of baking API communication and literacy into our operations through good API design. This is one aspect of hypermedia API design that I think often gets overlooked by the haterz. I'll keep adding these little building blocks to my API communications research, helping develop ways we can better communicate with our API consumers through our API design--in real time. 


Simple API Design Interface Features

I am on a quest to help improve and standardize the available API design tooling out there, and one aspect of doing this is spending time highlighting the API service providers who have interesting approaches to design embedded in their service.

Top of my list is Restlet, with their studio. There are a couple of things going on in Restlet Studio that I think are significant to the API design conversation, and that I would like to see become commonplace across service providers, and possibly even part of some sort of open source offering.

Restlet Studio has a nice human interface for designing APIs. When you load up an API you are given a simple, clean, yet comprehensive user interface for adding and updating API paths, and other finer details of your design, no developer skills necessary--allowing anyone to step up and help define the API contract.

Next, when you look at the bottom of your API design in Restlet Studio you are given a design, OpenAPI 2.0, and RAML 1.0 view of your API, allowing you to switch from user interface to a machine-readable definition of the API contract in two leading formats. This API contract will continue to define your API at every stop along the life cycle, and it significantly helps that this contract is human readable in YAML and Markdown. All API design editors should abstract away as much as they possibly can regarding transformations between API definition formats (use the API Transformer API).

I like Restlet's simple approach to delivering design tooling at the center of their platform. I'm keeping an eye on a variety of service providers who are innovating in this way, providing usable tooling when it comes to API design. If you are looking for other examples of this in the wild, take a look at APIMATIC or Stoplight--they are both designing high-quality API design interfaces and tooling. When crafting your own API design user interfaces make sure you take the time to study what is already out there, so that you are building on the best practices already in use, and not just reinventing the wheel when it comes to API design.

Disclosure: I am an advisor to APIMATIC, and Restlet is my partner.


From CRUD To An API Design Conversation With Human Services

I am working to take an existing API, built on top of an evolving data schema, and move forward a common API definition that 211 providers in cities across the country can put to use in their operations. The goal with the Human Services Data Specification (HSDS) API specification is to encourage interoperability between 211 providers, allowing organizations to better deliver healthcare and other human services at the local and regional level.

So far, I have crafted a v1.0 OpenAPI derived from an existing Code for America project called Ohana, as well as a very CRUD (Create, Read, Update, and Delete) version 1.1 OpenAPI, with a working API prototype for use as a reference. I'm at a very important point in the design process with the HSDS API, and the design choices I make will stay with the project for a long, long time. I wanted to take pause and open up a conversation with the community about what is next with the API's design.

I am opening up the conversation around some of the usual API design suspects, like how we name paths, use headers, and status codes, but I feel like I should also be asking the big questions around the use of hypermedia API design patterns, or possibly even GraphQL--a significant portion of the HSDS APIs driving city human services will be data intensive, and maybe GraphQL is one possible path forward. I'm not looking to do hypermedia and GraphQL because they are cool, I want them to serve specific business and organizational objectives.

To stimulate this conversation I've created some Github issues to talk about the usual suspects like versioning, filtering, pagination, sorting, status & error codes, but also opening up threads specifically for hypermedia and GraphQL, and a thread as a catch-all for other API design considerations. I'm looking to stimulate a conversation around the design of the HSDS API, but also develop some API design content that can help bring some folks up to speed on the decision-making process behind the crafting of an API at this scale.

HSDS isn't just the design for a single API, it is the design for potentially thousands of APIs, so I want to get this as right as I possibly can. Or at least make sure there has been sufficient discussion for this iteration of the API definition. I'll keep blogging about the process as we evolve, and settle in on decisions around each of these API design considerations. I'm hoping to make this a learning experience for myself, as well as all the stakeholders in the HSDS project, but also provide a blueprint for other API providers to consider as they are embarking on their API journey, or maybe just the next major version of their release.


Open Source Drag And Drop API Lifecycle Design Tooling

I'm always on the hunt for new ways to define, design, deploy, and manage API infrastructure, and I thought the AWS CloudFormation Designer provides a nice look at where things might be headed. AWS CloudFormation Designer (Designer) is a graphic tool for creating, viewing, and modifying AWS CloudFormation templates, which translates pretty nicely to managing your API infrastructure as well.

While the AWS CloudFormation Designer spans all AWS services, all the elements are there for managing the core stops along the API life cycle, like definition, design, DNS, deployment, management, monitoring, and others. Each of the Amazon services is available with a listing of each element available for the service, complete with all the inputs and outputs as connectors on the icons. Since all the AWS services are APIs, it's basically a drag and drop interface for mapping out how you use these APIs to define, design, deploy and manage your API infrastructure.

Using the design tool you can create templates for governing the deployment and management of API infrastructure by your team, partners, and other customers. This approach to defining the API life cycle is the closest I've seen to what stimulated my API subway map work, which became the subject of my keynotes at APIStrat in Austin, TX. It allows API architects and providers to templatize their approaches to delivering API infrastructure, in a way that is plug and play, and evolvable using the underlying JSON or YAML templates--right alongside the OpenAPI templates we are crafting for each individual API.

The AWS CloudFormation Designer is a drag and drop UI for the entire AWS API stack. It is something that could easily be applied to Google's API stack, Microsoft, or any other stack you define--something that could easily be done using APIs.json, developing another layer of templating for which resource types are available in the designer, as well as the formation templates generated by the design tool itself. There should be an open source "API formation designer" available that could span cloud providers, allowing architects to define which resources are available in their toolbox--something anyone could fork and run in their environment.

I like where AWS is headed with their CloudFormation Designer. It's another approach to providing full lifecycle tooling for use in the API space. It almost reminds me of Yahoo Pipes for the AWS Cloud, which triggers awkward feels for me. I'm hoping it is a glimpse of what's to come, and that someone steps up with an even more attractive drag and drop version that helps folks work with API-driven infrastructure no matter where it runs--maybe Google will get to work on something. They seem to be real big on supporting infrastructure that runs in any cloud environment. *wink wink*


Getting Feedback From Your API Community When Developing APIs

Establishing a feedback loop with your API community is one of the most valuable aspects of doing APIs, opening up your organization to ideas from outside your firewall. When you are designing new APIs or the next generation of your APIs, make sure you are tapping into the feedback loop you have already created within your community, by providing access to the alpha, beta, and prototype versions of your APIs.

The Oxford Dictionaries API is doing this with the latest additions to their stack of word-related APIs, providing early access for their community to two of their new API prototypes that are currently in development:

  • The Oxford English Dictionary (OED) is the definitive authority on the English language containing the meaning, history, and pronunciation of more than 280,000 entries – past and present – from across the English-speaking world. Its historical record of the English language is traced through more than 3.5 million quotations ranging from classic literature and specialist periodicals to film scripts and cookery books.
  • bab.la offers quick and easy translations and answers to everyday language questions. As part of the Oxford Dictionaries family, it provides practical support to people using a language that is not their mother tongue.

To get access to the new prototypes, all you have to do is fill out a short questionnaire, and they will consider giving you access to the prototype APIs. It is interesting to review the questions they ask developers, which help qualify users, but also ask some questions that could potentially impact the design of the API. The Oxford Dictionaries API team is smart to solicit some external feedback from developers before getting too far down the road developing their API and making it available in any production environment.

I do not think all companies, organizations, and government agencies have it in their DNA to design APIs in this way. There are also some concerns when you are doing this in highly competitive environments, but there are also some competitive advantages in doing this regularly, and developing a strong R&D group within your API ecosystem--even if your competitors get a look at things. I'm going to be flagging API providers who approach API development in this way, and start developing a list of best practices to consider when it comes to including your API community in the design and development process, and leveraging this kind of feedback loop.


Taking A Look At The Stoplight API Spec Editor

I'm keeping an eye on the different approaches by API service providers when it comes to providing API editors within their services and tooling. While I wish there was an open source GUI API editor out there, the closest thing we have is from these API service providers, and I am trying to track on what the best practices are so that when someone does step up and begin working on an open, embeddable solution, they can learn from my stories about what is working or not working across the space.

One example I think has characteristics that should be emulated is the API Spec Editor from Stoplight. The GUI editor lets you manage all the core elements of an OpenAPI like the general info, host, paths, and even the shared responses and parameters. They even provide what they call a CRUD builder where you paste the JSON schema, and they'll generate the common paths you will need to create, read, update, and delete your resources. Along the way you can also make calls to API endpoints using their interactive interface, helping ensure your API definition is actually in alignment with your API.

The Stoplight API Spec Editor bridges the process of defining your OpenAPI for your operations with actually documenting and engaging with an API through an interactive client interface. I like this approach to coming at API design from multiple directions. Apiary first taught us that API definitions were about more than just documentation, and I think our API editors should keep evolving on this concept, allowing us to engage with any stops along the API life cycle, like we are seeing from API service providers like Restlet.

I'm already keeping an eye on Restlet and APIMATIC's approach to providing a GUI API design editor within their solutions and will keep an eye out on other providers as I have time. Like other areas of the API sector, I'm hoping I can develop a list of best practices that any service provider can follow when developing their tools and services.


REST, Linked Data, Hypermedia, GraphQL, and gRPC

I'm endlessly fascinated by APIs and enjoy studying their evolution. One of the challenges in helping evangelize APIs that I come across regularly is the many different views of what is or isn't an API amongst people who are API literate, as well as helping bring APIs into focus for API newcomers, because there are so many possibilities. Out of the two, I'd say that dealing with API dogma is by far a bigger challenge than explaining APIs to newbies--dogma can be very poisonous to productive conversations, and ends up working against everyone involved in my opinion.

I'm enjoying reading about the evolution in the API space when it comes to GraphQL and gRPC. There are a number of very interesting implementations, services, and tooling emerging in both these areas. However, I do see similar mistakes being made regarding dogmatic behavior, aggressive marketing tactics, and shaming folks for doing things differently, as I've seen with REST, hypermedia, and linked data efforts. I know folks are passionate about what they are doing, and truly believe their way is right, but I'm concerned you will all suffer from the same deficiencies in adoption I've seen with previous implementations.

I started API Evangelist with the mission of counteracting the aggressive approach of the RESTafarians. I've spent a great deal of time thinking about how I can turn average developers and even business folks on to the concept of APIs--no, not just REST or just hypermedia, but web APIs in general. Something that I now feel includes GraphQL and gRPC. I've seen many hardworking folks invest a lot into their APIs, only to have them torn apart by API techbros (TM) who think they've done it wrong--not giving a rat's ass regarding the need to actually help someone understand the pros and cons of each approach.

I'm confident that GraphQL will find its place in the API toolbox, and enjoy significant adoption when it comes to data-intensive API implementations. However, I'd say 75% of the posts I read are pitting GraphQL against REST, stating it is a better solution. Period. No mention of its limitations or use cases where it might not be a good idea. Leaving us to only find out about these from the GraphQL haters--playing out the exact same production we've seen over the last five years with REST v Hypermedia. Hypermedia is finding its place in some very useful API implementations like FoxyCart, and AWS API Gateway (to name just a few), but its growth has definitely suffered from this type of storytelling, and I fear that GraphQL will face a similar fate. 

This problem is not a technical challenge. It is a storytelling and communication challenge, bundled with some very narrow incentive models fueled by a male-dominated startup culture, where folks really, really like being right and making others feel bad for not being right. Stop it. You aren't helping your cause. Even if you do get all your techbros thinking like you do, your tactics will fail in the mainstream business world, and you will only give more ammo to your haters, and further confuse your would-be consumers, adopters, and practitioners. You will have a lot more success if you are empathetic towards your readers, and produce content that educates and empowers, over shaming and tearing down.

I'm writing this because I want my readers to understand the benefits of GraphQL, and I don't want gRPC evangelists to make the same mistake. It has taken waaaay too long for linked data efforts to recover, and before you say it isn't a thing, it has made a significant comeback in SEO circles because of Google's adoption of JSON-LD, and a handful of SEO evangelists spreading the gospel in a friendly and accessible way--not because of linked data people (they tend to be dicks in my experience). As I've said before, we should be investing in a robust API toolbox, and we should be helping people understand the benefits of different approaches, and learn about the successful implementations. Please learn from others' mistakes in the sector, and help us see meaningful growth across all viable approaches to doing APIs--thanks.


A Looser More Evolvable API Contract With Hypermedia

I wrote about how gRPC API implementations deliver a tighter API contract, but I wanted to also explore more thoughts from that same conversation, about how hypermedia APIs can help deliver a more evolvable API contract. The conversation where these thoughts were born was focused on the differences between REST and gRPC, in which hypermedia and GraphQL also came up--leaving me thinking about how our API design and deployment decisions can impact the API contract we are putting forth to our consumers.

In contrast to gRPC, going with a hypermedia design for your API means your client relationship can change and evolve, providing an environment for things to flex and change. Some APIs, especially internal APIs and those for trusted partners, might be better suited for gRPC performance, but when you need to manage volatility and change across a distributed client base, hypermedia might be a better approach. I'm not advocating one over the other, I am just trying to understand the different types of API contracts brought to the table with each approach, so I can better articulate them in my storytelling.

I'd say that hypermedia and gRPC approaches give API providers a different type of control over the API clients that are consuming resources. gRPC dictates a high performance, tighter coupling by generating clients, while hypermedia allows for shifts in what resources are available, what schema are being applied, and changes that might occur with each version, potentially without changes to the client. The API contract can evolve (within logical boundaries), without autogeneration of clients, and interference at this level.
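
The difference is easier to see in a client sketch--a hypermedia consumer discovers URLs at runtime instead of baking them in, so the provider can move resources around without breaking it (a hypothetical HAL-style API):

```python
import requests

API_ROOT = "https://api.example.com/"  # the only URL the client hard-codes

# Start at the root and follow the link relations the server advertises.
root = requests.get(API_ROOT, headers={"Accept": "application/hal+json"}).json()

# If the provider moves /services to /v2/service-listings tomorrow, this
# client keeps working--it never constructed the URL itself.
services_url = root["_links"]["services"]["href"]
services = requests.get(services_url).json()
```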

As I learn about this stuff and consider the API contract implications, I feel like hypermedia helps API providers navigate change, evolve, and shift to deliver resources to a more unknown, and distributed client base. gRPC seems like it provides a better contract for use in your known, trusted, and higher performance environments. Next, I will be diving into what API discovery looks like in a gRPC world, and coming back to this comparison with hypermedia, and delivering a contract. I am feeling like API discovery is another area where hypermedia will make sense, further helping API providers and API consumers conduct business in a fast changing environment.


A Tighter API Contract With gRPC

I was learning more about gRPC from the Google team last week, while at the Google Community Summit, as well as the API Craft SF Meetup. I'm still learning about gRPC, and how it contributes to the API conversation, so I am trying to share what I learn as I go, keeping a record for others to learn from along the way. One thing I wanted to better understand was something I kept hearing regarding gRPC delivering more of a tighter API contract between API provider and consumer.

In contrast to more RESTful APIs, a gRPC client has to be generated by the provider. First, you define a service in a .proto file (aka Protocol Buffer), then you generate client code using the protocol buffer compiler. Where client SDKs are up for debate in the world of RESTful APIs, and client generation might even be frowned upon in some circles, when it comes to gRPC APIs, client generation is a requirement--dictating a much tighter coupling and contract between API provider and consumer.
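
To make that concrete (with a hypothetical organizations service, not an actual Google example), the provider publishes a .proto file, the consumer compiles it, and the generated stub becomes the contract:

```python
# Assuming a hypothetical organizations.proto along these lines:
#
#   service Organizations {
#     rpc GetOrganization (OrganizationRequest) returns (Organization);
#   }
#
# the consumer generates client code with the protocol buffer compiler:
#
#   python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. organizations.proto
#
# and then works through the generated stub--the contract is baked in.
import grpc

import organizations_pb2
import organizations_pb2_grpc

channel = grpc.insecure_channel("api.example.com:50051")
stub = organizations_pb2_grpc.OrganizationsStub(channel)

org = stub.GetOrganization(organizations_pb2.OrganizationRequest(id="42"))
print(org.name)
```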

I do not have first-hand experience with this process yet; I am just learning from my discussions last week, and trying to understand how gRPC is different from the way we've been doing APIs using a RESTful approach. So far it seems like you might want to consider gRPC if you are looking for significant levels of performance from your APIs, in situations where you have a tighter relationship with your consumers, such as internal or partner scenarios. gRPC requires a tighter API contract between provider and consumer, something that might not always be possible, depending on the situation.

While I'm still getting up to speed, it seems to me that the .proto file, or the protocol buffer definition, acts as the language for this API contract--similar to how OpenAPI is quickly becoming a contract for more RESTful APIs, although it is oftentimes a much looser contract. I'll keep investing time into learning about gRPC, but I wanted to make sure and process what I've learned leading up to, and while at, Google this last week. I'm not convinced yet that gRPC is the future of APIs, but I am getting more convinced that it is another important tool in our API toolbox.


Focus On Having A Robust And Diverse API Toolbox

I'm learning a lot about HTTP/2 and gRPC this week, so I have been thinking about how we isolate ourselves by focusing in on individual toolsets, where we should really be expanding our horizons, helping ensure we have the most robust and diverse API toolbox we possibly can. Depending on what part of the tech universe an engineer comes from they'll have a different view on just exactly what I mean when I say API to them.

The most common perspective that people respond with is REST. Folks automatically think this is what I mean when I say API--it isn't. That is your dogma or your gullibility to other people's dogma--please expand your horizons. When I say APIs, I mean an application programming interface that leverages web technology (aka HTTP, HTTP/2). Sure REST is the dominating philosophy in this discussion, but I see no benefit in limiting ourselves in this fashion--I see API as a toolbox, with the following toolsets:

  • REST - Considering Roy Fielding's guidance when it comes to leveraging HTTP in my API design.
  • Hypermedia - Considering the web, and the role that links can play in your API design.
  • Webhooks - Helping keep APIs a two-way street, expanding your API to respond to events not just single requests.
  • GraphQL - Helping put powerful query tooling in the hands of API consumers when working with large and complex datasets.
  • gRPC - Acknowledging that the HTTP protocol is evolving, and our API contract can evolve along with the specification.

This represents the major toolsets that are currently in my API toolbox. Sure there are a number of other individual tools and specifications in there as well, but this represents the top tier toolsets I am looking at when it comes to getting things done. In my opinion, there is a time and place for each of these toolsets to be put to use. I can easily geek out on any of these areas, but because I'm not selling you a product or service, I'm more interested in finding the right tool for the situation.

I do not expect that all providers will have this robust of an API toolbox from day one--it is something that will come with time. My recommendation is everyone begins with REST, developing proficiency in this domain, while also learning about hypermedia and webhooks. As your understanding evolves, then you should start looking at GraphQL, gRPC, and other more advanced approaches to round off your skills. Do not let your religious technological beliefs or any single vendor's motivations limit the scope of your API toolbox--you should be working to make it as robust and diverse as you possibly can.


I Am Learning About gRPC APIs From Google

I have been processing Google's API design guide, and an unexpected part of the work has been learning more about gRPC, with which Google is "converging designs of socket-based RPC APIs with HTTP-based REST APIs"--something I have not seen in an API design guide until now. "gRPC uses protocol buffers as the Interface Definition Language (IDL) for describing both the service interface and the structure of the payload messages", and is something I'm hearing more chatter about from larger providers, which I think represents the evolving world of API design beyond the old REST days.

According to the site, "gRPC is used in last mile of computing in mobile and web client since it can generate libraries for iOS and Android and uses standards based HTTP/2 as transport allowing it to easily traverse proxies and firewalls. There is also work underway to develop a JS library for use in browsers. Beyond that, it is ideal as a microservices interconnect, not just because the core protocol is very efficient but also because the framework has pluggable authentication, load balancing etc. Google itself is also transitioning to use it to connect microservices."

To help me learn anything I need an example to reverse engineer, showing me how things work--so the working examples Google provides on the gRPC site are where I will start.

I'm going to add a research area for gRPC, similar to hypermedia, as well as GraphQL. It helps me better keep track of the news about each approach to crafting APIs, and gives me a single place I can go to reference service providers and working examples. I've had gRPC / Protocol Buffers on my monitoring list for some time now, but seeing Google invest so heavily in this area gives me a signal that I should be paying attention to gRPC more, and gathering more examples that I can share with my readers.

I'm thinking that I will assemble some sort of toolbox area in my API design research, helping folks understand when a more RESTful approach makes sense, as well as when hypermedia, GraphQL, or maybe a gRPC approach might be more appropriate. Next, I'm going to dive into the BigTable, PubSub, and Speech APIs to see gRPC in action--something I'm hoping will help me better understand Google's approach to API design, which is something I'm struggling to grasp completely.


If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.