The Knowledge Sharing Push
We are now seeing a global push toward knowledge sharing, an effort prompted by many factors, such as fostering innovation, getting products to market more quickly, and improving organizational efficiency.
It might seem logical to apply something on the order of quantum mechanics to solve the knowledge sharing conundrum. Yet while quantum mechanics can help explain the motion and interaction of atoms and subatomic particles, the same cannot be said for knowledge sharing solutions, because of the human factor: people are unpredictable.
In theory, a knowledge sharing structure should tap all of the information held within an organization to form a cohesive knowledge sharing framework. However, not everyone in an organization is willing to “give up” their expertise to others because, as the old saying goes, “Knowledge is Power and Power is Knowledge.” As a result, fiefdoms are often created and “only enough” information is given when asked. Thus, we should use the term Selective Knowledge Sharing (SKS), because it seems to be closer to the mark. It also stands to reason that the more transparent the knowledge sharing framework, the greater the chances of reaching organizational goals and objectives.
The Dilemma – What Standard or Standards to Support
The key question, “what standard or standards should we support as an organization?”, is a challenge that many CIOs and CTOs currently face on both the domestic and international fronts. Case in point: when deploying a wireless trading application in Hong Kong for a large North American bank, many internal as well as external issues arose. Not only was the ability to support Mandarin and Cantonese a primary concern, but so was how to handle data exchange with local carriers. This complex deployment exposed the challenges not only of internal (North America to Asia) data exchange, but also of interfacing with local business partners. Consequently, this dilemma is one many organizations currently face: which data interchange standard or standards to support.
The Possible Options
As the experts in this study have shown, JSON and XML each offer a multitude of benefits, so supporting both could ultimately allow for greater flexibility across the global data stage. Not to be forgotten is the vision of the Semantic Web and its model for data interchange, the Resource Description Framework (RDF). The market is still somewhat immature, so organizations need to be highly flexible and closely investigate which option or options best meet their short-, medium-, and long-term goals and objectives. At this point in time, the debate seems to revolve around whether JSON or XML is better for internal- and external-facing knowledge sharing applications. Below are the two prime areas of interest:
Internal-Facing Systems – What do we currently have in place for internal data interchange?
External-Facing Systems – What data interchange structure is currently in place to deal with our business partners, and what technologies are they building to?
For reference, the three open standards at the center of the debate are summarized below:

| Standard | Description |
| --- | --- |
| JSON | JavaScript Object Notation is a lightweight data-interchange format built on two structures: 1) a collection of name/value pairs, realized in various languages as an object, record, struct, dictionary, hash table, keyed list, or associative array; and 2) an ordered list of values, realized in most languages as an array, vector, list, or sequence. |
| XML | Extensible Markup Language is a non-proprietary subset of SGML. It focuses on data structure and uses tags to specify the content of the data elements in a document, while XML schemas are used to define and document XML applications. Web services are components that reside on the Internet and are designed to be published, discovered, and invoked dynamically across disparate platforms and networks; the methods within a given Web service may use the Simple Object Access Protocol (SOAP) to send or receive XML data, eliminating the laborious steps involved in creating a remote object. The contrast between the traditional (centralized) and Web services (distributed, network-centric, XML-based) models is illustrated by CORBA, which requires that methods written in a CORBA-supporting language be converted to an interface definition language (IDL) before being used; with Web services, no IDL is needed, since a programming language need only make a call across the Internet using HTTP and then handle an XML response. |
| RDF | Resource Description Framework is a metadata model for describing objects and the relationships among them. RDF has features that facilitate data merging even when the underlying schemas differ, and it specifically supports the evolution of schemas over time without requiring all data consumers to change. It extends the linking structure of the Web, using URIs to name the relationship between things as well as the two ends of the link (commonly referred to as a “triple”). This simple model allows structured and semi-structured data to be mixed, exposed, and shared across different applications. The linking structure forms a directed, labeled graph, where the edges represent named links between two resources, represented by the graph nodes. |
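To ground the definitions above, here is a minimal sketch of the same hypothetical record expressed as both JSON and XML, parsed with Python's standard library; the record and its field names (plan, name, goal) are invented for illustration.

```python
import json
import xml.etree.ElementTree as ET

# The same hypothetical record in both formats.
JSON_DOC = '{"plan": {"name": "FY25 Strategy", "goals": ["Grow", "Retain"]}}'
XML_DOC = "<plan><name>FY25 Strategy</name><goal>Grow</goal><goal>Retain</goal></plan>"

# JSON parses directly into native structures: the name/value pairs become
# a dict and the ordered list becomes a list.
plan = json.loads(JSON_DOC)["plan"]
print(plan["name"])    # -> FY25 Strategy
print(plan["goals"])   # -> ['Grow', 'Retain']

# XML parses into an element tree; values are recovered by walking nodes.
root = ET.fromstring(XML_DOC)
print(root.findtext("name"))                   # -> FY25 Strategy
print([g.text for g in root.findall("goal")])  # -> ['Grow', 'Retain']
```

Note how JSON's two structures map one-to-one onto the language's own types, while XML leaves the mapping to the consumer.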
Views from the Experts
The primary goal of this study is to expose both the positives and negatives of each open standard. Accordingly, below are views from several well-respected experts:
JSON seems to be getting a lot of developer attention, yet as a longtime supporter of XML in standards, I think we need to be aware of it and remain flexible. It may be in our interest to consider providing JSON or JSON-LD (Linked Data) versions of standards originally specified as XML schema to make it easier for developers to implement these standards. Standards are only valuable if they are implemented. Likewise, we need to consider the usefulness of RDF representations, which can be mapped from JSON-LD, enhancing the value of an original XML schema-based standard.
~Rex Brooks, CEO-President at Starbourne Communication
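Brooks's mapping from JSON-LD to RDF can be sketched in miniature. The code below is not a conforming JSON-LD processor; it naively flattens a single flat document (with hypothetical identifiers, borrowing schema.org terms purely for illustration) into RDF-style subject/predicate/object triples.

```python
import json

# A minimal JSON-LD-style document (hypothetical identifiers).
DOC = """{
  "@context": {"name": "http://schema.org/name",
               "knows": "http://schema.org/knows"},
  "@id": "http://example.org/person/alice",
  "name": "Alice",
  "knows": {"@id": "http://example.org/person/bob"}
}"""

def to_triples(node):
    """Naively flatten one flat JSON-LD-style node into
    (subject, predicate, object) triples."""
    ctx = node.get("@context", {})
    subject = node.get("@id", "_:b0")  # blank node if no @id
    triples = []
    for key, value in node.items():
        if key.startswith("@"):
            continue  # skip JSON-LD keywords
        predicate = ctx.get(key, key)  # expand the term via the context
        obj = value["@id"] if isinstance(value, dict) else value
        triples.append((subject, predicate, obj))
    return triples

for triple in to_triples(json.loads(DOC)):
    print(triple)
```

A real deployment would hand this job to a proper JSON-LD processor, but the sketch shows why the mapping is natural: the @context supplies exactly the names RDF needs for its links.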
Bottom line IMHO is that JSON trumps XML because of its flexibility and inherent ability to “be” unstructured, which I’m finding more and more important these days with the way people want to collect, assemble and consume data (particularly big data). While you can allow XML to deal with unstructured data, if you rely on your schemas (as you should if you’re using XML) you’ll be forced to continually update your schema. Then there’s SOAP for data transfer and the multitude of XML parsers to deal with. For JSON, it’s schema-less, but not un-schema-able. You can provide a schema, if you want to, with every object. In this sense, it is self-schema-able.
~Charles Assaf, Software Architect
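Assaf's “schema-less, but not un-schema-able” point can be made concrete with JSON Schema. Below is a minimal sketch assuming the third-party jsonschema Python package; the record and its fields are hypothetical.

```python
# pip install jsonschema  (one of several JSON Schema validators)
from jsonschema import ValidationError, validate

# A schema is optional; it can be supplied alongside the data when needed.
SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "goals": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["name"],
}

record = {"name": "FY25 Strategy", "goals": ["Grow", "Retain"]}

try:
    validate(instance=record, schema=SCHEMA)  # raises on a mismatch
    print("record conforms to the schema")
except ValidationError as err:
    print("schema violation:", err.message)
```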
One would think that a mark-up copy operation is straightforward. For XML models with an inner “Core” (<StrategicPlanCore>), like StratML, there is only one copy-operation view.
Not so for HTML where, thanks to Web Browser rendering, HTML behaves like a Central Processing Unit, recalculating ordered-list display indices automatically according to the OL type attribute.
Imagining that you and a Cloud dweller both have access to a Browser, the same HTML page ought to produce identical indices for identical ordered-list types. Of course, other style features can vary widely. While this muddies the pattern-recognition waters somewhat, it should not have any persistent (i.e., non-reversible) effect on the data, no matter how far apart the Browser screens or which Browser brand (any Browser using the same industry standards).
XML: 1) a list of ingredient names and 2) fractional composition (summing to 100%).
JSON: 1) a list of ingredient names.
~Gannon Dick, Software Developer
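One reading of Dick's ingredient analogy is that an XML document can carry both the names and a machine-checkable composition constraint, while a bare JSON list carries the names alone. A minimal sketch, with invented ingredients and fractions:

```python
import json
import xml.etree.ElementTree as ET

# XML carrying ingredient names plus fractional composition
# (hypothetical values that must sum to 100).
XML_DOC = """<recipe>
  <ingredient fraction="60">flour</ingredient>
  <ingredient fraction="40">water</ingredient>
</recipe>"""

# JSON carrying the ingredient names alone.
JSON_DOC = '["flour", "water"]'

root = ET.fromstring(XML_DOC)
total = sum(float(i.get("fraction")) for i in root.findall("ingredient"))
print(total == 100.0)        # the composition constraint is checkable
print(json.loads(JSON_DOC))  # only the list of names survives
```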
Whilst being more verbose, XML offers many other advantages. For example, XML schemas allow one to describe, extend, communicate and validate XML datasets. XSLT allows for easy transformation of XML from one format into another, and XPath/XQuery engines allow for deep querying of native XML files. It is this added maturity which makes XML better for communicating (and storing) data between applications.
Of course, with a little clever programming, or a handful of increasingly available third-party tools, either standard can be used in most circumstances, leaving us spoiled for choice. Given the increasing prevalence of open architectures and cloud-based software and data ecosystems, it seems likely that organizations will have little choice but to embrace both.
~Chris Fox, Founder & CEO: StrategicLearningApp.com
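Fox's point about deep querying can be illustrated even with Python's standard library, whose ElementTree module supports a limited XPath subset; full XPath, XQuery, and XSLT support requires a richer engine such as the third-party lxml package. The document below is hypothetical.

```python
import xml.etree.ElementTree as ET

XML_DOC = """<plans>
  <plan status="active"><name>FY25 Strategy</name></plan>
  <plan status="draft"><name>FY26 Outlook</name></plan>
</plans>"""

root = ET.fromstring(XML_DOC)

# An XPath-style query: names of plans whose status attribute is "active".
for name in root.findall("./plan[@status='active']/name"):
    print(name.text)  # -> FY25 Strategy
```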
All too often, questions are artificially couched in terms of either/or when both/and might be a better answer. The simple fact that many developers favor JSON is a good enough reason for them to use it to demonstrate to others what can be done with it. However, business-quality records require sufficient structure to support the purposes for which they exist. The appropriate tools should be appropriately applied for the appropriate purposes.
Data without context is meaningless and, worse, can be easily misused for nefarious purposes. Now that a schema specification is being developed for JSON, perhaps it may become a worthy competitor to XML for business-quality records. That’s a matter of maturity, in the sense of the Capability Maturity Model (CMM).
The business requirements should dictate the technology. Over time, as business managers come to better understand the technology, the business requirements will prevail. The question is how long it will take. In the meantime, any machine-readable rendition of information can be transformed into any other with relatively little difficulty, as long as both the semantics and the structure of the data are clearly specified.
~Owen Ambur, Chair StratML Committee
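Ambur's claim that any machine-readable rendition can be transformed into any other is easy to demonstrate in miniature. The sketch below is one naive way to map a flat, hypothetical XML record into JSON; a production converter would also have to decide how attributes, namespaces, and mixed content carry across, which is exactly where his caveat about clearly specified semantics and structure bites.

```python
import json
import xml.etree.ElementTree as ET

XML_DOC = "<plan><name>FY25 Strategy</name><goal>Grow</goal><goal>Retain</goal></plan>"

def element_to_dict(elem):
    """Map an XML element to a dict, grouping repeated child tags into lists."""
    out = {}
    for child in elem:
        value = child.text if len(child) == 0 else element_to_dict(child)
        if child.tag in out:  # a repeated tag becomes a JSON array
            if not isinstance(out[child.tag], list):
                out[child.tag] = [out[child.tag]]
            out[child.tag].append(value)
        else:
            out[child.tag] = value
    return out

root = ET.fromstring(XML_DOC)
print(json.dumps({root.tag: element_to_dict(root)}, indent=2))
```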
The debate between JSON and XML is generally overstated for a wide variety of use cases. In most situations, which simply involve passing data around, it doesn't matter which format we use. The best choice is far more a function of how the application is designed, the comfort level of the developer, and the preexisting systems in place. JSON has the perception of being lighter weight with quicker response times, but the reality is that in many real-world scenarios XML can perform just as quickly.
That said, when it comes to rich data that is actively parsed, transformed, or reshaped, XML is the go-to standard. XML is a full-featured language with a rich set of standards and applications framed around it, useful for querying, transformation, validation, and more. The ecosystem around XML is not just focused on passing data back and forth in a lightweight manner, but on extending the data into new territories for maximum, and even creative, data utilization.
~Umesh Thota, Founder of AuthBase.net & Ranjeeth Thunga, Founder of PerspectiveMapper.com
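Of the ecosystem features Thota and Thunga mention, validation is perhaps the most mature. A minimal sketch, assuming the third-party lxml package and a hypothetical schema:

```python
from lxml import etree  # pip install lxml

XSD = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="plan">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="name" type="xs:string"/>
        <xs:element name="goal" type="xs:string" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

XML_DOC = "<plan><name>FY25 Strategy</name><goal>Grow</goal></plan>"

# Compile the schema, then validate a document against it.
schema = etree.XMLSchema(etree.fromstring(XSD.encode()))
doc = etree.fromstring(XML_DOC.encode())
print(schema.validate(doc))  # -> True; invalid documents return False
```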
Notably, the goal of this short study was not to show which open standard is best, but to take the debate to a higher level. As the experts from around the world have shown, there are many important factors (e.g., deployment, communication, and storage) that should be taken into consideration. For data interchange, the open standards JSON, XML, and RDF all have a number of positive attributes along with various limitations. For those reasons, organizations should be both flexible and creative in deploying the next generation of knowledge sharing solutions.
It may seem rudimentary, but the larger the organization, the more fiefdoms seem to exist. As a result, overlap often occurs, which means that resources are not being used as efficiently as possible. This all-too-common scenario also raises a key question: do we as an organization support too many technologies in our current portfolio, and should we consider supporting another open standard?
Organizations should implement well-thought-out strategic plans and knowledge sharing frameworks that leverage open standards such as JSON, XML, and RDF. This type of progressive, rational thinking will help usher in a new era, one that promotes a more agnostic and human-centric World Wide Web.