In our last post we looked at general patterns of standards compliance in Open Network solutions. In this post we drill down another layer to look at interoperability at the Application Programming Interface (API) level, which raises issues beyond standards compliance alone.
As we’ve mentioned previously, network equipment has been focused on interface compatibility and interoperability for many decades and has a history of real interoperability success. Traditional networks exposed communications interfaces and most of the standards for network equipment focus on these interfaces.
But the advent of software equivalents to hardware network devices opens up new areas for problems.
Software components may implement the same types of communications interfaces, but they also provide Application Programming Interfaces (APIs) for interaction with other software components. These APIs may be the subject of standards, in which case the issues raised in the previous article apply. Or they may simply be proprietary APIs, unique to the vendor.
So we need to look at how APIs can support interoperability, and also at the problems in API implementation that make interoperability more challenging.
There are a number of levels at which APIs are open and potentially interoperable, or not:
- Availability of the specification, and vendor support for third-party implementation (standard or proprietary)
- Level of compliance with any documentation (standardised or not)
- Ability of the underlying components to satisfy the exposed API
Previously, we covered the different degrees of compliance and the obstacles these put in the way of successful Open Network solutions. In this post we'll focus on the other two items.
Availability of the Interface Specification
Open Standards specifications are generally available, but often not freely available. Some organisations restrict specifications to varying levels of membership of their organisation. Sometimes only paid members can access the specifications.
Proprietary interfaces may be available under limited conditions, or not at all. Availability is usually higher for de facto standards, because openness lets the standard's owner exert influence over the marketplace. Highly proprietary interfaces typically have higher hurdles to access: the specification is often released only when an actual customer requests it, for itself or on behalf of a solution integrator.
Practical Accessibility in a Project
It's one thing to get access to an API specification document, but it's very much another to gain practical access to the information necessary to implement an interface to that API.
An Open Network solution may have hundreds of APIs in its inventory of components, or more. These APIs must be available for use by the solution designers, and a typical approach is to publish them in a searchable catalogue. This might be 'open' in one sense, but it is not necessarily interoperable.
Solution integrators must also have access to support resources to help with issues arising during implementation (bugs, etc.). It is far too common for API documentation to be short on detail, inaccurate, or simply out of date. The richness of these support resources, and the availability of live support specialists, translates directly into implementation productivity.
Ability of the Underlying Components to Satisfy the API
Software has had a number of successes in achieving syntactic and representational openness, but semantic openness is much harder. Using a REST API as an example: I can post a correctly formatted and encoded payload to a REST endpoint, but unless the receiving application understands the semantic content of that payload, the interface doesn't work.
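The REST point can be sketched in a few lines. The example below is a minimal, hypothetical receiver (the endpoint behaviour, the "action" field, and the action names are all invented for illustration): it accepts any syntactically valid JSON, but rejects payloads whose semantics it wasn't built to understand.

```python
import json

# Hypothetical receiver: it only understands payloads whose "action"
# field names a semantic it was built for. Field and action names
# here are invented for illustration.
KNOWN_ACTIONS = {"create_service", "delete_service"}

def handle_request(body: str) -> tuple:
    """Return an (http_status, message) pair for a POSTed JSON body."""
    try:
        payload = json.loads(body)  # syntactic check: is it valid JSON?
    except json.JSONDecodeError:
        return 400, "malformed payload"
    action = payload.get("action")
    if action not in KNOWN_ACTIONS:
        # Syntactically perfect, semantically meaningless to this receiver.
        return 422, "unknown action: %s" % action
    return 200, "accepted: %s" % action

# A well-formed request the receiver understands...
print(handle_request('{"action": "create_service"}'))   # (200, ...)
# ...and an equally well-formed request it does not.
print(handle_request('{"action": "reboot_universe"}'))  # (422, ...)
```

Both requests pass every syntactic check; only the first one actually works, because interoperability at this level depends on shared meaning, not just shared encoding.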
And if the underlying components cannot service the request in a common (let alone standard) way, theoretical interoperability becomes difficult and/or constrained.
An NFV example may help.
Consider an NFV Orchestration use case that performs auto-scaling of NFV instances based on some measure of throughput against capacity. Most NFV components make it easy to obtain the relevant metrics via telemetry.
But it is the range of available metrics, and the algorithms used to generate them, that introduces complexity and potentially impacts interoperability.
One NFV vendor might report this measure as CPU utilisation at the total-NFV level. Another might report CPU utilisation per VM. Vendors may also use different algorithms to calculate the metric they each call "CPU Utilisation", or vary considerably in how often it is updated. Yet another vendor might not provide CPU utilisation at all, but instead a packets-per-second metric.
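In practice, an orchestrator often has to absorb these differences behind an adapter layer. The sketch below shows one way this might look; the vendor labels, metric names, and rated capacity are all invented for illustration, and a real system would also have to reconcile update timing and calculation algorithms, which this sketch ignores.

```python
# Hypothetical adapter layer: each vendor reports "load" differently,
# so the orchestrator normalises everything to a 0.0-1.0 utilisation
# figure before applying a single auto-scaling rule. Vendor names,
# metric keys, and the rated capacity below are invented.

MAX_PPS = 1_000_000  # assumed rated packets-per-second for vendor C

def normalise(vendor: str, sample: dict) -> float:
    if vendor == "A":                      # total-NFV CPU percentage
        return sample["cpu_util_total"] / 100.0
    if vendor == "B":                      # per-VM CPU percentages
        return max(sample["cpu_util_per_vm"]) / 100.0  # busiest VM
    if vendor == "C":                      # raw packets per second
        return sample["packets_per_sec"] / MAX_PPS
    raise ValueError("no adapter for vendor %r" % vendor)

def should_scale_out(vendor: str, sample: dict, threshold: float = 0.8) -> bool:
    return normalise(vendor, sample) > threshold

print(should_scale_out("A", {"cpu_util_total": 92}))             # True
print(should_scale_out("B", {"cpu_util_per_vm": [40, 55, 61]}))  # False
print(should_scale_out("C", {"packets_per_sec": 850_000}))       # True
```

Every vendor a solution adds means another adapter, and the choices buried in each one (busiest VM or average? what rated capacity?) are exactly where "theoretical" interoperability gets constrained.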
APIs play a significant role in implementing Open Network solutions and achieving interoperability. However, they are not a "silver bullet", and there can be many challenges. As with standards compliance, API availability, and compliance with any standard, cannot be assumed.
In the last few posts we’ve focused on software-related topics, but it’s time to bring back the Networking side of Open Networking for our last two posts. Leaving technology aside for the moment, how does a solution integrator deal with the different paradigms for solution implementation that can exist in an Open Networking project? We’ll cover that in the next post.
The post Real-World Open Networking. Part 4 – Interoperability: Problems with API’s appeared first on Aptira.