Data and Operations Management
Virtualization and cloud computing are not just innovations that require existing operations management solutions to support new environments. Virtualized and cloud-based environments are so different from their predecessors that an entirely new management stack must be built to manage them effectively. This new stack will be so different that it will replace, rather than augment, the incumbent management stacks from legacy vendors. This ushers in the era of Big Data Operations Management.
When do you need Big Data Operations Management?
The short answer is that, depending upon who you are and what you are trying to do, Virtual Operations Management may or may not be a big data problem. If your environment is reasonably small, if neither your applications nor their supporting infrastructure require high-fidelity monitoring, and if you are not changing the environment very frequently, then Virtual Operations Management can be addressed without big data approaches.
However, if your environment is large (say, 1,000 physical hosts or more) and you are running business-critical, performance-critical applications in it, then you are going to want high-fidelity monitoring of that environment. This is particularly true if short outages or brownouts in those applications are a severe business problem. If so, highly granular and frequent data collection is critical, and you will want to focus on the following factors:
- Real-time data collection. Every five minutes is not good enough, even if that five-minute data point is an average of 15 samples collected every 20 seconds over the period. You are going to want something that collects the data from each vSphere host every 20 seconds, and ideally even more frequently, although 20 seconds is a current limitation of vSphere. The key question to ask here is, "how long am I willing to wait between the time something bad happens and the time the data collection system in my management product notices it?" For online business-critical applications, the answer might be no more than one second.
- Real-time event processing. Once the management system has collected the data and detected the presence of a problem, how long does it take for that system to raise an alert or take an action? This is what many products refer to as "real time," but most of them ignore the delays in collecting the data in the first place, mentioned above.
- Comprehensive data collection. This means not missing events, peaks in response time or latency, or drops in throughput. It also means broadening the waterfront of what is collected to include data from a variety of sources.
- Deterministic data collection. This means getting as close as possible to the actual value that matters, not an estimate. Averaging occurs at every level of the data collection stack: operating systems inherently sample, and provide either periodic samples or rolled-up averages of multiple samples. Averaging at any level obscures valuable data and can seriously mislead you into thinking everything is OK when it is not.
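The distortion that averaging introduces is easy to demonstrate. A minimal sketch, using hypothetical latency samples (in milliseconds, not drawn from any real monitoring product), shows how a severe spike within a 5-minute window can vanish when 15 twenty-second samples are rolled up into a single average:

```python
# 15 hypothetical latency samples (ms), one every 20 seconds over a
# 5-minute window. One sample captures a severe spike (a brownout).
samples = [20, 21, 19, 22, 20, 21, 20, 1500, 20, 19, 21, 20, 22, 20, 21]

ALERT_THRESHOLD_MS = 500  # hypothetical alerting threshold

five_minute_average = sum(samples) / len(samples)
peak = max(samples)

# The rolled-up average stays below the alert threshold, so a system
# that only sees 5-minute averages raises no alert...
print(f"5-minute average: {five_minute_average:.0f} ms "
      f"(alerts: {five_minute_average > ALERT_THRESHOLD_MS})")

# ...while a system that evaluates every sample catches the brownout.
print(f"Peak sample:      {peak} ms "
      f"(alerts: {peak > ALERT_THRESHOLD_MS})")
```

A pipeline that evaluates each sample as it arrives would have flagged the spike within one 20-second collection interval; the averaged view never sees it at all.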