Cloud-Based Architecture & Integration: Notes From The Experts At Kitepipe

Cloud-Based Architecture and Integration - Notes from the Front Lines
 
I’ve had the privilege of being on the front lines of the move from on-premise to cloud-based enterprise application architectures. For the last five years I’ve run Kitepipe, a cloud integration services firm, and have participated in a wide range of projects moving IT infrastructure, applications, and business functions to the cloud.
 
In the following series I’ll share my experiences and observations on this fundamental shift in application architecture. I’ll focus on the integration implications, as that is my practice area. Kitepipe gets involved when the integration implications of moving a business process from an on-prem location to a cloud application become evident. For some clients this happens in the planning or sales process; for others, during implementation. And quite often integration needs surface only after the move, when the pain/opportunity equation of implementing integration becomes clear.
 

Cloud Applications vs. Cloud Infrastructure

Let’s first distinguish between two types of migration to the cloud: applications in the cloud, and infrastructure in the cloud.
 
From Amazon, you can rent servers in their cloud. These are large physical servers in a server farm operated by Amazon, which are subdivided into virtual servers that can be created at will. These virtual machines (VMs) function just like a server on your site. You can get them running Windows or a range of Linux variants, with a wide range of resource specs in terms of RAM, storage, and network bandwidth. Firms other than Amazon Web Services (AWS) provide this service, but AWS is the largest and best known.
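To make that concrete, here is a minimal sketch of renting one of these virtual servers programmatically, using the AWS SDK for Python (boto3). The image ID and instance size below are placeholders, not recommendations.

```python
# Minimal sketch: provisioning a virtual server (EC2 instance) with boto3.
# The AMI ID and instance type are placeholders, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Linux image ID
    InstanceType="t3.medium",         # a small general-purpose VM class
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```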
 
Your company rents a set of these from AWS, and moves jobs, databases, and applications from on-site servers in your data center to these AWS servers. Once network connectivity is established, things run just the same as they did on-premise, except that you no longer own and operate the hardware. This is infrastructure in the cloud, or IaaS (Infrastructure as a Service). It’s a service because you don’t own it; you rent it “as a service”.
 
But the software applications that run on it are still yours, or purchased/licensed by you. That part hasn’t changed. You may still be running your licensed version of SAP, but now the hardware it runs on is owned by Amazon and rented by you. And the application architecture, and integration architecture, is still the same. Once network connectivity is in place, however you were getting data into and out of those applications is unchanged.
 
In contrast, the rise of cloud applications, or SaaS (Software as a Service), has changed the architecture and integration landscape. In SaaS, you are no longer running “your” application, code, or database; you are renting seats on the vendor’s fully managed application to execute a specific business process for your company. The biggest and best-known SaaS vendor is, of course, Salesforce. They started with an application focused on a very common use case: managing the sales process and its associated data. This application had two important innovations:
  • Multi-tenancy - all data for all customers lives in the same set of database tables (sketched below)
  • Sold as a service - the customer rents the use of the application per user per month
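To illustrate the multi-tenancy point, here is a minimal sketch of the pattern: every row carries a tenant identifier, and every query the application runs is scoped to one renting customer. The table, column, and tenant names are hypothetical.

```python
# Multi-tenancy sketch: one shared table, with every query scoped by tenant.
# Table, column, and tenant names are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE opportunities (tenant_id TEXT, name TEXT, amount REAL)")

# Two customers' data live in the same physical table...
db.execute("INSERT INTO opportunities VALUES ('acme', 'Widget deal', 5000.0)")
db.execute("INSERT INTO opportunities VALUES ('globex', 'Sprocket deal', 9000.0)")

# ...but the application only ever reads one tenant's rows at a time.
rows = db.execute(
    "SELECT name, amount FROM opportunities WHERE tenant_id = ?", ("acme",)
).fetchall()
print(rows)  # [('Widget deal', 5000.0)]
```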
 
These two innovations, working together, completely transformed the global $400B enterprise software industry (Gartner, 2017). I’ve written in detail about the hows and whys of this transformation, here and here. The combination of dramatic cost reduction, dramatic feature increase, and removal of buying and implementation barriers made SaaS applications the overwhelming choice for new implementations, and increasingly for replacement implementations of business application software.
 
 

SaaS: How Cloud Applications are Dominating 

In contrast to IaaS, SaaS has changed the architecture and integration landscape in several important ways. The first way is obvious: you are running your business process on an application and data structure that you don’t own (you just rent it), and you typically can’t access the data structures directly. Thus the rise of the API (application programming interface), which gives you a programmatic interface (structured calls and returns) to your data. This is different from on-premise applications, where you typically had access to the data structures, and made database queries to get bulk data that you moved to another database.
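Here is a brief sketch of the difference, using Salesforce’s REST query endpoint as the SaaS example. The endpoint shape is Salesforce’s published one, but the instance URL and session token below are placeholders.

```python
# Then (on-premise): query the application's tables directly, e.g.
#   SELECT Id, Name, Amount FROM opportunities WHERE stage = 'Closed Won'
#
# Now (SaaS): go through the vendor's API instead. The instance URL and
# token below are placeholders.
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"  # placeholder
TOKEN = "00D...session-token"                        # placeholder

resp = requests.get(
    f"{INSTANCE}/services/data/v57.0/query/",
    params={"q": "SELECT Id, Name, Amount FROM Opportunity "
                 "WHERE StageName = 'Closed Won'"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["Id"], record["Name"], record["Amount"])
```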
 
The other impact of SaaS is more subtle: the rise of “Best of Breed” applications, and the fragmenting of the application software space into sub- and sub-sub-domains. For a number of reasons, including ease of penetrating the market, SaaS applications have focused on a specific business function for a specific market segment. This has led to an explosion of SaaS applications on the market. No one knows how many, but informed estimates put the number at well over 10,000. No longer do you buy an HR application; you license the use of a recruiting and onboarding application for tech companies (such as Jobvite - a bit of googling found a SaaS solution in this randomly chosen segment).
 
This explosion means that an enterprise no longer has a dozen applications on-premise, but now has fifty applications in the cloud - each one the best of breed for a specific business process (onboarding, lease accounting, community management, etc.). The compelling economics of best-of-breed SaaS applications mean that a department can buy its own solution, without central IT support. In fact, some of our fast-growing tech customers have no central IT function in the traditional sense; each functional group sources its own applications.
 
This proliferation of cloud applications blows away traditional notions of IT architecture and master data management. The cows are out of the barn, and all IT can do is chase them around the pasture. How do you define and manage an overall information architecture when your departments and business units acquire and set up their own applications?
 
As the application landscape fragments and moves to the cloud, the business process data created by customers using these applications moves to the cloud as well. This is everything from a job applicant’s address to the pricing of a widget you just sold. This data is now all over the place, in 50 different cloud databases that you can only access through an API, or programming interface. But the HR benefits application needs the addresses from the recruiting application. And the billing application needs the pricing from the sales application. How is all this valuable business data, spread across fifty application platforms, going to move from one to another to enable seamless business processes?
 
Well, you need integration tools.  You need structured means of moving business process data from one cloud application to another.  
 

Integrating Cloud-Fragmented Applications for Business Optimization

 
Enter iPaaS, or Integration Platform as a Service. About 10 years ago, a very smart guy from Philadelphia, Rick Nucci, saw that all of the above would come to pass, and that there would be a need for integration tools to connect this coming blizzard of cloud applications. So he bet the farm on converting his on-premise EDI software company, Boomi, into the first cloud-focused integration platform, pioneering the segment that would become iPaaS. The Boomi platform was the first, and remains the best, set of integration tools for the new “cloud-fragmented” application architecture. Boomi has the important features needed to be a complete solution for the new cloud application architecture, including:
  • A huge library of connectors that talk to the application APIs
  • A high-performance Atom runtime engine, run on-premise or in the Boomi cloud
  • Codeless, hyper-productive builder tools that allow you to build and change integrations quickly
  • Full SDLC support, including build, deploy, and monitoring tools
  • An expanded product line that includes API management, MDM, queues, and more
 
We will talk more about how this mature Boomi tool set fits the needs of the cloud integration specialist.
 
So, there is a need for enterprises to integrate this new cloud-fragmented application space. But what do these integration architectures look like? In the past there has been a series of integration approaches that moved in and out of favor, including enterprise data models, the enterprise data warehouse, SOA, and the enterprise service bus. What is the emerging integration architecture in the cloud-fragmented application space? And who would be knowledgeable about this emerging IT practice area?
 

Integration That Aids The Business Process, Not Just Data Sync

Enter Kitepipe, a cloud integration services firm specializing in building integration processes in the emerging cloud-fragmented application space. Kitepipe’s founder, Larry Cone (your humble author), is another smart Philly guy. I was VP of Engineering at an early cloud application company near Philadelphia. So early that we didn’t call it cloud, or even SaaS, but an ASP: Application Service Provider. We needed an integration tool to connect to our newly installed NetSuite instance, and I selected Boomi. I knew about Boomi and Rick through the local IT ecosystem, and they were a natural choice. I took the training, did some hands-on building with the Boomi tools, and really liked the product.
 
My insight came a bit later. I too saw the coming fragmentation of the application space into the cloud, and wrote about it here and here. I too saw the coming need for integration between these cloud-based applications. I too saw that Boomi was a great toolset, purpose-built for this role. I started a cloud integration services company, Kitepipe, to address this coming market. Kitepipe is today a leader in implementing and managing Boomi integration suites for companies large and small.
 
One Additional Insight
 
And I had one additional insight, after working at integration for a while, which is this: the value to the business comes from thinking about integration at the business process level, rather than the data level. So, don’t think about moving closed-won opportunities from Salesforce to NetSuite - think about how the integration process can complete the sales-fulfillment-billing cycle across multiple applications. Understand the operational challenges, and build an integration process that solves those challenges.
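Here is a minimal sketch of that difference in mindset. Every helper below is a hypothetical stub standing in for a call into an application’s API; the point is that one integration process carries the deal through the whole cycle.

```python
# Process-level integration sketch: carry a closed-won deal through the
# whole sales -> fulfillment -> billing cycle, rather than copying one
# record and stopping. All helpers are hypothetical stubs for API calls.

def ensure_customer_exists(account):
    return {"id": f"CUST-{account}"}   # stub: look up / create in accounting

def create_sales_order(customer, lines):
    return {"id": "SO-1001", "customer": customer["id"], "lines": lines}

def create_invoice(order):
    return {"id": "INV-2001", "order": order["id"]}

def complete_sales_cycle(opportunity):
    customer = ensure_customer_exists(opportunity["account"])
    order = create_sales_order(customer, opportunity["lines"])
    invoice = create_invoice(order)
    # Write keys back so every system can trace the same deal.
    return {"opportunity": opportunity["id"],
            "order": order["id"], "invoice": invoice["id"]}

print(complete_sales_cycle(
    {"id": "OPP-1", "account": "Acme", "lines": [{"sku": "W-100", "qty": 2}]}))
```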
 
But what does the integration architecture look like? At last, we are getting to the core of this series of posts. Let’s take that sales process as an example, deconstruct it, and look at the architecture that results from implementing it using best practices.
 
Often, the best way to proceed is to understand the set of problems to be solved. I call this “selling to pain” - it’s the approach I use in the sales process to convey the value of the proposed integration project. So, here are some typical pain points in the sales-to-accounting integration:
  • different field and data structures
  • different code values for the same data
  • wrong customer
  • wrong/bad pricing
  • special or promotional pricing
  • wrong territory or geography
  • wrong currency
  • related deal components - not SOX compliant
  • all happens at the end of the quarter
  • not well forecasted
  • new products/offerings not yet in accounting
 
The number one role of the integration is to transform fields and codes from sales format to accounting format, but that is just the start.  Most of the above issues can be characterized as “bad data” - data from Sales that does not fit (yet) into Accounting.  An effective integration approach needs to start with the understanding that most integration problems are data problems.  Based on this, we at Kitepipe have developed a template for building highly functional integrations.  
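As a small illustration of that transform-and-catch-bad-data idea, here is a sketch that maps sales-side codes to accounting-side codes and flags anything that doesn’t fit. The code tables are hypothetical.

```python
# Sketch: map sales-side codes to accounting-side codes, and treat any
# value that doesn't map as bad data to route to a human. The code
# tables below are hypothetical.

TERRITORY_MAP = {"NA-East": "US01", "NA-West": "US02", "EMEA": "EU01"}
CURRENCY_MAP = {"US Dollar": "USD", "Euro": "EUR"}

def transform(sales_rec):
    errors = []
    territory = TERRITORY_MAP.get(sales_rec["territory"])
    currency = CURRENCY_MAP.get(sales_rec["currency"])
    if territory is None:
        errors.append(f"unknown territory: {sales_rec['territory']}")
    if currency is None:
        errors.append(f"unknown currency: {sales_rec['currency']}")
    if errors:
        return None, errors    # bad data: alert someone, don't post
    return {"subsidiary": territory, "currency": currency,
            "amount": sales_rec["amount"]}, []

print(transform({"territory": "NA-East", "currency": "Euro", "amount": 1200.0}))
print(transform({"territory": "APAC", "currency": "Euro", "amount": 99.0}))
```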
 

Rich Cloud Integration for Better Access to Data 

So the number one role of the integration is Access - the process needs to be able to selectively query the data in the right state. The next role is Transform - map source data to target data profiles. The third role is Enrich - look up or add master data to the transaction, as needed. The fourth role is Validate - is all the data good? The fifth role is Alert - tell someone who can fix the data. The sixth role is Post - post the transaction to the target application, and detect any errors. The seventh role is Synchronize - make sure that there is a key pair between the application platforms that can be used to match the transactions for future updates. The next role is Recovery - the integration process should be recoverable in case of failure, ideally by just running it again. The last role is Status - make sure users on either side know what happened, and what the status of the transaction(s) is.
 
So that’s:
  • Access - select the data you want
  • Transformation - map source to target
  • Enrich - include data from other sources to enrich the transaction
  • Validation - ensure that the data is valid
  • Alerts - notify those who can fix the bad data
  • Post - post the new or updated transaction to the target system(s), and detect any errors
  • Synchronization - post keys to target, and back to the source to confirm traceability
  • Recovery - structure the process for recovery, ideally by rerunning, and without intervention
  • Status - post status to logs, people, source and target platforms
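Here is a minimal sketch of those nine roles strung together as one pipeline. In Boomi these are configured shapes on a canvas rather than code; every helper below is a hypothetical stub, shown only to make the structure concrete.

```python
# Illustration only: the nine roles of a rich integration as one pipeline.
# In Boomi these are configured shapes, not code. Every helper here is a
# hypothetical stub standing in for a connector or platform feature.

def access():            return [{"id": 1, "amount": "100", "code": "NA-East"}]
def transform(r):        return {"id": r["id"], "amount": float(r["amount"])}
def enrich(r):           return {**r, "subsidiary": "US01"}
def validate(r):         return [] if r["amount"] > 0 else ["non-positive amount"]
def alert(r, problems):  print("ALERT:", r["id"], problems)
def post(r):             return {"target_id": f"T-{r['id']}"}
def synchronize(r, res): print("key pair:", r["id"], "<->", res["target_id"])
def status(r):           print("status logged for:", r["id"])

def run_integration():
    for rec in access():                 # 1. Access: select data in the right state
        try:
            target = transform(rec)      # 2. Transform: map source to target profile
            target = enrich(target)      # 3. Enrich: add master data as needed
            problems = validate(target)  # 4. Validate: is all the data good?
            if problems:
                alert(rec, problems)     # 5. Alert: tell someone who can fix it
                continue
            result = post(target)        # 6. Post: write to target, detect errors
            synchronize(rec, result)     # 7. Synchronize: store the key pair
        except Exception as exc:
            alert(rec, [str(exc)])       # 8. Recovery: fail cleanly; a rerun retries
        finally:
            status(rec)                  # 9. Status: tell users what happened

run_integration()
```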
 
This set of features is built into our process templates at Kitepipe, and speeds the integration development process. Because Boomi is a powerful, purpose-built platform for building integrations, all of the above functionality can be quickly built, tested, and deployed, without writing code.
 
These functions of the integration are architecture at the process level - a Boomi process can (and should) accomplish all of these functions for a given set of transactions. There are additional choices to make which impact how the process is executed - batch vs. real time, for instance - but in Boomi the integration structure and logic are the same; the only difference is how the process is invoked. You can select a batch of records from the sales system and process them, or deploy a “listener” - a web service that accepts calls from a trigger in the sales system - to process sales records on an event basis. But in Boomi the process structure is the same.
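A sketch of that invocation difference, with the same process body behind two triggers. Flask is used here purely to illustrate an event listener; in Boomi the listener is a deployed web service, not code you write.

```python
# Same integration logic, two triggers: a scheduled batch sweep, and an
# event listener. Flask is used purely as an illustration of a listener.
from flask import Flask, request

def process_records(records):
    for rec in records:
        print("processing", rec)   # stand-in for the full pipeline above

# Batch: a scheduler invokes this to sweep pending records periodically.
def run_batch():
    pending = [{"id": 1}, {"id": 2}]   # stand-in for querying the source system
    process_records(pending)

# Real time: the sales system's trigger POSTs each event as it happens.
app = Flask(__name__)

@app.route("/sales-events", methods=["POST"])
def on_sales_event():
    process_records([request.get_json()])
    return {"status": "accepted"}, 202
```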
 
This represents a “point-to-point” integration solution - from the sales application to the accounting application. Many of the integrations we build at Kitepipe - and we build a lot of integrations - are of this type. But surely this is not an enterprise integration architecture? Well, yes it is - a point-to-point integration solves a business process problem quickly and effectively.
 
But what about my complex enterprise, where we have possibly fifty application endpoints and the possible point-to-point integrations run into the hundreds or more? In fact, 50 applications yield 50 × 49 / 2 = 1,225 unique pairs of integration points - twice that if direction is counted separately. This rapidly becomes unmanageable, and a different approach is needed. Or is it…
 
Having looked at long lists of application platforms in a single enterprise, I can confirm that the vast majority of application pairs, taken randomly, don’t make sense, and would not be implemented. So, you don’t need to manage 1,200 integrations. Our most complex customers have implemented 30 or 40 integrations, which is manageable by a small, well-trained integration team. But not all of those are point-to-point - we use some integration hub techniques to manage complexity in large enterprises.
 

Evolution of Cloud Integration Architecture 

Probably the best way to think of integration architecture is in terms of evolution - what do you do first, and next, and what might the final state look like? This is a good approach to take, because it allows quick wins on the way to a more sustainable architecture.
 
Ah, for the (bad) old days, when a corporate IT shop could embark on 24-month, 36-month, or 5-year re-architecture or master data implementation projects, which got repeatedly delayed, then quietly cancelled. No one has that kind of time or money anymore. We at Kitepipe need to sing for our supper every night, so we had better be putting something into production that improves the client’s business every six to eight weeks.
 
So, how do you implement a complex cloud-fragmented application integration architecture?  As the old riddle asks, how do you eat the Elephant?  One spoonful at a time.
 
Here is our successful recipe for building a comprehensive cloud integration architecture for the enterprise.

 Recipe for Cloud Integration Success

 
Phase One: Build rich point-to-point transaction integrations for the revenue stream
 
The revenue stream is where you can have the most impact on most businesses.  Speed it up just a little bit, and you pay for the whole party, and make important people happy (like the CFO).  Plus the metrics are in place - people will notice if you cut processing time from sale to fulfillment, or reduce the errors between sale and revenue recognition.
 
This looks different for different businesses - sometimes it’s customer acquisition, sometimes order processing or rev rec, sometimes fulfillment, sometimes exception handling (returns, back orders, RMAs, etc.). But applying integration tools to the pain points in the revenue cycle is always going to make your client a hero!
 
As a Phase One A, build additional point-to-point integrations that support other critical business functions. In some of our high-growth customers these are HR or benefits integrations.
 
Phase Two: Build master data hubs for key Master Data objects
 
As applications proliferate in the cloud, many or most of them need some common master data. Most larger enterprises have the most trouble with customer data. Multiple product lines, sales teams, and divisions have different customer sets, and these are never reconciled. You can often make a revenue case around cross-selling, if only there were a clean, common customer master. A clean customer master is often a regulatory requirement, or at least a source of embarrassment. One of our projects was funded on the basis that no one in the organization could agree on how many customers they had. So a customer master data hub is a pain/opportunity point for many organizations.
 
However, it is often a complicated problem to solve, with many stakeholders (Sales, Accounting, CRM). For many organizations, we find that an Employee or Worker master data hub is the better target. There are fewer stakeholders (mostly the HR or HRM team and IT), quicker wins (simpler problems that have just not been addressed), and visible results (everyone in the organization can be positively impacted).
 
So we often recommend an employee or worker master data hub be the next phase.  The benefits can include:
 - recruiting and onboarding support for the fast-growing organization
 - provisioning of users across multiple/many applications, cloud or on-prem
 - support for regulatory / SOX compliance reporting on information access
 
It’s a problem every company has to a greater or lesser degree, it has a regulatory driver, and it supports a typically underserved team, HR. Scope is manageable using a Kitepipe-templated Boomi MDM solution, and the results are visible to the whole organization.
 
Based on that success, you may want to take on the typically more difficult Customer master data hub.
 

Boomi Master Data Hub is Different 

As an aside, a Boomi MDM solution is much different from the prior generation of complex, on-prem Master Data Management approaches. It is lightweight, and functions solely in support of integration. That means the Boomi MDM repository is not a central place to manage and update master data. It is a way of centralizing the movement of master data across the application space using a master data hub that keeps an up-to-date copy of the data that must be shared between applications.
 
Data is still managed and updated in a source application (like Workday, for worker data), but the data that must be shared is passed through an MDM hub that manages priority, conflicts, and distribution of data to all interested applications. It’s a way of simplifying the many point-to-point connections into a hub-and-spoke architecture.
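Here is a minimal sketch of that hub-and-spoke idea: the hub keeps the shared copy of a record and pushes changes to every subscribed application. The source and subscriber names are hypothetical, and real conflict/priority handling is elided.

```python
# Hub-and-spoke sketch: the hub holds the shared copy of a worker record
# and distributes updates to subscribed applications. Source and
# subscriber names are hypothetical; conflict handling is elided.

class MasterDataHub:
    def __init__(self):
        self.golden = {}         # key -> current shared record
        self.subscribers = []    # callables that push to each spoke app

    def subscribe(self, push_fn):
        self.subscribers.append(push_fn)

    def upsert(self, key, record, source):
        # Real hubs arbitrate priority and conflicts here; this sketch
        # simply accepts the latest update from the source of truth.
        self.golden[key] = {**record, "source": source}
        for push in self.subscribers:
            push(key, self.golden[key])

hub = MasterDataHub()
hub.subscribe(lambda k, r: print("-> benefits app:", k, r["name"]))
hub.subscribe(lambda k, r: print("-> badge system:", k, r["name"]))
hub.upsert("W-42", {"name": "Ada L."}, source="Workday")
```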
 
Phase Three: Build Transactional Data Hubs
 
A transactional data hub is like a master data hub, in that a point-to-point map is simplified into a hub-and-spoke map. An example is a revenue data hub, where many systems might contribute transactions with revenue impact to a set of finance or fulfillment applications. These hubs are typically not persistent data stores, unlike the master data hubs described above. Instead, they are ways of simplifying a complex set of point-to-point integrations. A transactional data hub might have these features (sketched in code after the list):
 
 - Access transactions, often from one or more queues
 - Map to a standard, or canonical transaction format
 - Enrich the transaction with master data, from applications or master data hubs
 - Validate the transaction vs business rules
 - Log the transaction status and validation results
 - Store the transaction in an outbound pub-sub queue to be read by interested applications
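A minimal sketch of those steps, with Python’s standard queue standing in for a real pub-sub mechanism, and hypothetical source names and mappings:

```python
# Transactional hub sketch: inbound transactions from several sources are
# mapped to one canonical format, validated, logged, and published to an
# outbound queue. queue.Queue stands in for a real pub-sub mechanism;
# source names and mappings are hypothetical.
import queue

outbound = queue.Queue()

def to_canonical(source, txn):
    # Each source gets its own map into the canonical revenue format.
    if source == "webstore":
        return {"customer": txn["cust"], "amount": txn["total"], "currency": txn["cur"]}
    if source == "crm":
        return {"customer": txn["account"], "amount": txn["amount"], "currency": "USD"}
    raise ValueError(f"unknown source: {source}")

def publish(source, txn):
    canonical = to_canonical(source, txn)
    if canonical["amount"] <= 0:         # validate vs. business rules
        print("rejected:", canonical)    # log the status and validation result
        return
    outbound.put(canonical)              # interested applications consume from here
    print("published:", canonical)

publish("webstore", {"cust": "Acme", "total": 250.0, "cur": "USD"})
publish("crm", {"account": "Globex", "amount": 0})
```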
 
For our clients with very complex, global application landscapes, a transactional data hub can be the most effective architecture. This architecture is a conceptual approach, and is built using the Boomi toolset: AtomSphere processes, Master Data Hub, queues, and API management web services.
 
But our recommendation is always to take it in phases. Starting with a transactional data hub architecture increases the time, resources, and risk dramatically. We believe that, like agile approaches across the IT landscape, the best way to implement is to build a rich, high-function, point-to-point integration that addresses pressing business problems, usually somewhere in the revenue stream.
 
A skilled team, such as a Kitepipe integration team using Boomi, can go from project kickoff to production in a few weeks. The limiting schedule factor is usually client data quality and UAT issues - the development of even a complex point-to-point integration can be done in a week.
 

Schedule A Boomi Cloud Integration Architecture Workshop with Kitepipe

If your application architecture is increasingly fragmented, and you believe that your organization can gain significant benefits from implementing a lightweight but effective integration architecture, we should talk. Our customers have benefitted from an on-site integration architecture workshop, which includes:
  • Review of Boomi platform and toolset
  • Examples of mature Kitepipe integration customers
  • Tools to sell integration internally
  • Review of your application landscape
  • Evolution model to implement integration in your environment
  • Deep dive into an integration scenario 
 
 

© 2024 Kitepipe, LP. All rights reserved.