With competition from every side, service providers must roll out a great many new services and features in the next few years to slake the thirst of overheated markets. We’ve talked about service delivery platforms (SDPs) that can slash the time, cost and risk of doing that, not just for today’s services but by providing a platform for services we haven’t even thought of yet. SDPs also lend a hand with the cost side of the equation, which is helpful because while service providers gamely strive to reach consumers and businesses with every conceivable service on every possible device, they must also improve operational efficiency to align their cost structures with revenues. SDPs help ease capex in two ways: they employ more enterprise networking devices, which on average are far less expensive than telecom equipment, and their service-oriented architecture (SOA) enables service providers to leverage network capacity, content and other service components from myriad other sources instead of having to build it all themselves. SDPs also slash opex by providing a secure pathway between content providers and users/subscribers, saving the service provider not only the cost of developing all of its own multimedia content (see capex above) but also the staffing and other overhead that would normally be associated with conducting potentially millions of service transactions via call centers.
One of the things that makes B/OSS a bit maddening, and yet makes us love it all the same, is that no single solution, nor even one type of solution, holds all the answers. And so it is that beyond SDPs, the service provider landscape is littered with a sometimes-unwelcome “diversity” of hardware, software and systems. Tales of 40 billing systems at Verizon and 700+ OSSs at Telstra were not just fun facts to kick around over coffee at the latest telecom show; they were true. Most large service providers, even at the dawn of 2009, still have multiple systems for customer relationship management (CRM), inventory and network (or “service”) resource management (including multiple systems for inside and outside plant) and every other system you can think of across the service fulfillment, service assurance and revenue assurance spectrum. As the TM Forum’s Enhanced Telecom Operations Map (eTOM) so elegantly specifies, there exist no fewer than 72 separate areas of management need and responsibility inside the typical service provider, including things like strategy, product lifecycle management, financial management and stakeholder/external relations management that are about as far away as you can get from FAB (fulfillment, assurance and billing) or FCAPS (fault, configuration, accounting, performance and security management).
So there’s a ton of OSS/BSS out there and here’s the “even better news”: Much of the time it’s poorly integrated, and systems use and maintain overlapping information, which means not only that we’re expending duplicate MIPS all over the shop but that when bad data is lurking in a database like a ghost in the machine, it is merrily replicated throughout all systems to inflict maximum operational damage. Think of how many network devices exist in the typical service provider environment. Then consider that each and every one of them populates records in the device’s own element manager, in the network fault and performance management system and across other systems including inventory and asset management, billing/CRM and provisioning. Information quality deteriorates over time and systems fall further out of sync.
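To make the replication problem concrete, here is a minimal sketch (the systems, field names and data are all hypothetical, not any vendor’s actual schema) of how a simple audit can expose where the copies of one device’s record have drifted out of sync:

```python
# A hypothetical router as recorded in three systems; the copies have drifted.

def audit_device(records):
    """Compare one device's records across systems; return field-level mismatches.

    `records` maps system name -> dict of attributes for the same device.
    """
    mismatches = {}
    all_fields = set().union(*(r.keys() for r in records.values()))
    for field in sorted(all_fields):
        values = {sys: r.get(field) for sys, r in records.items()}
        if len(set(values.values())) > 1:  # the copies disagree (or a field is missing)
            mismatches[field] = values
    return mismatches

router = {
    "element_manager": {"hostname": "edge-rtr-01", "ip": "10.1.2.3", "card_count": 4},
    "inventory":       {"hostname": "edge-rtr-01", "ip": "10.1.2.3", "card_count": 3},
    "billing_crm":     {"hostname": "edge-rtr-1",  "ip": "10.1.2.3", "card_count": 3},
}

issues = audit_device(router)
# `issues` pinpoints which fields disagree and what each system believes:
# here, hostname and card_count are out of sync; ip still agrees everywhere.
```

The point of the sketch is that the disagreement itself is easy to detect once the records sit side by side; the hard part, as described above, is that in practice they never do.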
“Sounds bad, but what real impact does it have?” For starters, when information is inaccurate and inconsistent it breaks down the automation that our systems depend on to handle the massive volumes and real-time demands of our business today. The sheer volume of tools and systems makes it difficult or impossible to cross-train staff and implement flexible staffing plans, so users hop from system to system, navigating various interfaces as they try to research, diagnose and resolve issues. Ultimately it takes more time and resources to fulfill service requests, restore services affected by outages and accurately bill for services, and we’re still talking about today’s services, before the advent of “a thousand SDP-delivered services” that is the vision for the (nearer than we think) future. All of this can destroy customer satisfaction at a time when it’s never been easier for customers to switch providers.
What we need, in short, is a way to ensure data integrity across all systems, “unbundled,” if you will, from the underlying systems to provide out-of-band data integrity management. It has to include elements of enterprise search and document/content management to provide a user-friendly way of getting a handle on all data sources, plus the ability to identify and manipulate relationships between the various data components. In much the same way as a revenue operations center (ROC) is like a network operations center (NOC) for your business (providing an end-to-end view of the organization, but from a financial perspective), a data integrity management system has to deliver an end-to-end view of your operations from the standpoint of data.
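One way to picture “out-of-band” is that the integrity layer owns none of the records, only the links between them. The sketch below (systems, keys and relationship names are all invented for illustration) walks those links to assemble an end-to-end view without merging the underlying databases:

```python
# Stand-ins for three independent systems of record:
crm       = {"cust-42": {"name": "Acme Corp"}}
services  = {"svc-7":   {"type": "MPLS VPN", "status": "active"}}
inventory = {"dev-3":   {"hostname": "edge-rtr-01"}}

SYSTEMS = {"crm": crm, "services": services, "inventory": inventory}

# The cross-system relationships, kept out of band from the systems themselves:
wiring = [
    ("crm", "cust-42", "subscribes_to", "services", "svc-7"),
    ("services", "svc-7", "rides_on", "inventory", "dev-3"),
]

def end_to_end_view(system, key):
    """Follow wired relationships from one record to everything it touches."""
    view = [(system, key, SYSTEMS[system][key])]
    for src_sys, src_key, _rel, dst_sys, dst_key in wiring:
        if (src_sys, src_key) == (system, key):
            view.extend(end_to_end_view(dst_sys, dst_key))
    return view

trail = end_to_end_view("crm", "cust-42")
# `trail` walks customer -> service -> device across three databases
# while each system remains the authority for its own records.
```

The design choice worth noting is that only the wiring table needs to be maintained centrally; everything else stays where it already lives.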
We’re looking at a genuinely new area, one that cuts across a number of those eTOM building blocks and manages information in a new way. When we’re considering emerging technologies (or emerging solutions combining existing technologies), our best clues come from the realm of the known. A data integrity management solution could emanate from the ranks of enterprise search providers, from giants like Google and Microsoft to smaller players such as Vivísimo and ISYS, or someday, if it actually follows through on its vision: Cisco. It could also come from one of the market’s document management leaders including DocPoint, KnowledgeTree and Syntergy, or likelier still, a company that competes in both sectors like Documentum, IBM (via its acquisition of FileNet), Oracle or SAP.
What could be, however, is less compelling than what is. Or what soon could be. In this case, it’s a company that is still in the formative stages but is being built from the ground up specifically to address data integrity management. The company is NexGenData. Those on the launching pad are the former founding developers of respected inventory/network resource management (NRM) provider Visionael, and the product they are bringing to market, the NexGen Navigator, is designed to address the data integrity disaster looming, or already in view, for just about every large organization. NexGenData has applied techniques proven by web titans like Yahoo, Amazon and the aforementioned Google, things such as SOA and Web services, tagging, open source and mashups, to the enterprise. Mashups, not necessarily but most often “mobile mashups,” combine the so-called Three C’s: content, commerce and community (such as social networks). They may combine things like news feeds, weather reports, maps or traffic with corporate databases and spreadsheets to create an entirely new breed of customized business applications.
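The mashup idea is easy to demonstrate in miniature. The toy sketch below (the feed, spreadsheet and field names are all invented; a real mashup would pull the feed from a live API) joins an external weather source against an internal dispatch sheet to produce something neither source offers on its own:

```python
weather_feed = {  # stand-in for a live weather API
    "Austin": {"condition": "thunderstorms"},
    "Dallas": {"condition": "clear"},
}

dispatch_sheet = [  # stand-in for a corporate spreadsheet
    {"tech": "R. Diaz", "city": "Austin", "job": "fiber splice"},
    {"tech": "L. Chen", "city": "Dallas", "job": "ONT install"},
]

def flag_weather_risk(jobs, feed):
    """Annotate each dispatch with the feed's conditions and a risk flag."""
    risky = {"thunderstorms", "ice", "high winds"}
    return [
        {**job,
         "condition": feed[job["city"]]["condition"],
         "at_risk": feed[job["city"]]["condition"] in risky}
        for job in jobs
    ]

board = flag_weather_risk(dispatch_sheet, weather_feed)
# `board` now marks the Austin fiber splice as weather-at-risk,
# a fact neither the feed nor the spreadsheet contained on its own.
```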
But I digress. NexGen Navigator provides browser-based enterprise search and navigation, data wiring to establish relationships between disparate data sources, definition of business and data integrity rules, business process monitoring, data integrity checking and resolution, integration of systems and applications, and more. Here’s the day-to-day payoff: A company gains enterprise search and navigation across OSS/BSS and other information stores for consolidated views of critical business entities, logical and physical resources and services. It monitors critical business processes for service installation, change or disconnection and generates alarms and other notifications when a system’s information is out of sync. That renders a level of importance and granularity to data integrity issues that in years past was reserved for network faults. Better, in our view, it validates information as it’s being entered or updated, which should head off trouble early in the game, and captures historical information regarding issues and errors in processes to point to areas for improvement going forward.
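The “validates information as it’s being entered” idea is worth a quick sketch of our own (the rules, field names and data below are hypothetical illustrations, not NexGenData’s actual product logic): each update is checked against declared integrity rules before it is written, so bad data is rejected at the door instead of replicating across systems.

```python
import ipaddress

def _valid_ip(value):
    """True if `value` parses as an IPv4 address."""
    try:
        ipaddress.IPv4Address(value)
        return True
    except ipaddress.AddressValueError:
        return False

# Declarative integrity rules: (description, predicate over the record).
RULES = [
    ("hostname must be non-empty",
     lambda rec: bool(rec.get("hostname"))),
    ("ip must be a valid IPv4 address",
     lambda rec: _valid_ip(rec.get("ip", ""))),
    ("card_count must be a positive integer",
     lambda rec: isinstance(rec.get("card_count"), int) and rec["card_count"] > 0),
]

def validate_on_entry(record):
    """Return the list of rule violations; an empty list means accept the write."""
    return [name for name, check in RULES if not check(record)]

good = validate_on_entry({"hostname": "edge-rtr-01", "ip": "10.1.2.3", "card_count": 4})
bad  = validate_on_entry({"hostname": "", "ip": "999.1.2.3", "card_count": 0})
# `good` is empty (the write proceeds); `bad` lists all three violations,
# each of which could feed an alarm just like a network fault would.
```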
Devil’s advocate: Trendium. CEO Hanafy Meleis led Trendium to a lofty perch in the OSS market in the early 2000s with high-profile account wins, the “right” partnerships, industry awards and the like. The collapse of the telecom software market, which burned the phrase “the market experienced a downturn” into thousands of bravely worded news releases back then, certainly didn’t help. Yet while others such as MetaSolv and Syndesis emerged out the other side, Trendium never quite recovered and now, after a foray into software as a service (SaaS) and a headquarters move from Florida to Texas, Trendium has repackaged itself as NetTraffic, and it’s been a long time since any OSS market leader feared it on the RFP trail. One reason: Trendium was a “nice to have.” It offered, as Meleis and his product team demonstrated to us, data integrity and performance management not of the network but of the OSS itself. And while that was certainly a valuable proposition, it became a luxury most carriers simply could not afford or felt they could do without.
NexGenData is not Trendium. That is not to disparage either party; these are simply two different companies in different times. NexGenData’s founders are tech sharpshooters who know their way around the business, and they are wisely positioning themselves for the next market wave around things like mobile mashups. Yet the challenge I see in front of NexGenData and others who would provide data integrity management is to ingrain themselves into the market as a “need to have.”