Build a big data characterization engine that simulates massive data volumes to ensure timely availability of network, storage, and compute capacity
With an ever-growing big data footprint, enterprises face the need for additional storage, network, and computing capacity. Many struggle to keep up with, or anticipate, the capacity required by unprecedented data volumes and the power needed for synthesis, analysis, and storage. Smart, automated tools can help plan and build out infrastructure in time to match the data deluge.
Build a plug-and-play framework to measure and analyze the experience attributes of data acquisition and aggregation tools. It starts with a control UI that simulates the generation and injection of data using tunable input workloads, then measures and visualizes potential gateway bottlenecks. Additional capabilities include actuation-path characterization of southbound traffic and data analytics in the cloud.
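The tunable workload generation and bottleneck measurement described above could be sketched as follows. This is an illustrative Python sketch, not part of any existing framework: the record fields, parameter names (`payload_size`, `rate_hint`), and the no-op sink standing in for a Kafka producer are all assumptions.

```python
import json
import random
import time

def generate_records(num_records, payload_size=64, seed=None):
    """Simulate sensor-style records with a tunable count and payload size.

    Field names here are hypothetical placeholders for whatever the
    real acquisition tool would emit.
    """
    rng = random.Random(seed)
    records = []
    for i in range(num_records):
        records.append({
            "id": i,
            "ts": time.time(),
            "value": rng.random(),
            "payload": "x" * payload_size,  # pad to simulate message size
        })
    return records

def measure_ingest_rate(records, sink):
    """Push serialized records into a sink callable and return records/second.

    In a real deployment the sink would wrap a Kafka producer or the
    MQTT-Kafka bridge; here any callable works, so bottlenecks in the
    sink show up directly as a lower measured rate.
    """
    start = time.perf_counter()
    for rec in records:
        sink(json.dumps(rec))
    elapsed = time.perf_counter() - start
    return len(records) / elapsed if elapsed > 0 else float("inf")

# Example: an in-memory buffer standing in for the gateway
buffer = []
records = generate_records(1000, payload_size=128, seed=1)
rate = measure_ingest_rate(records, buffer.append)
```

Sweeping `num_records` and `payload_size` from the control UI and plotting the measured rate is one way to expose where the gateway saturates.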
Technology stack: HTML5, CSS3, jQuery, Python, Flask, MQTT-Kafka Bridge, Kafka, Elasticsearch, Kibana