Weathering the Storm: Disaster Recovery and Business Continuity as a Service (DR/BCaaS) in 2024
This is where disaster recovery (DR) and business continuity (BC) come in, ensuring your operations keep humming along even amid chaos. And with the growing popularity of as-a-service solutions, you can now access these critical services without the hefty upfront investment or extensive expertise needed for traditional in-house implementations.
But 2024 brings a twist: artificial intelligence (AI) is rapidly weaving itself into the fabric of DR and BC planning. Let’s explore how this dynamic duo is changing the game.
Imagine a system that anticipates disruptions before they happen, automatically executes pre-defined recovery processes, and learns from each incident to optimize future responses. That’s the power of AI in DR and BC. Here are some key ways it’s making a difference:
The DR/BCaaS landscape is brimming with solutions. Here are some of the leading players with innovative AI-powered offerings:
Choosing the right vendor depends on your specific needs, budget, and technical expertise. Look for providers with robust AI capabilities, proven track records, and transparent pricing models.
The future of DR/BCaaS is collaborative, automated, and predictive. AI will play a central role, constantly evolving and learning to safeguard your business against ever-evolving threats. Remember, investing in a DR/BC solution isn’t a frivolous expense; it’s an insurance policy against unforeseen risks. With the right as-a-service solution, you can weather any storm with confidence, ensuring your business continuity and resilience in the face of the unknown.
Additional Tips:
By embracing DR/BCaaS and harnessing the power of AI, you can confidently navigate the uncertainty of the future and ensure your business thrives, no matter what comes your way.
To learn more, take a look at GigaOm’s DR/BCaaS Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
If you’re not yet a GigaOm subscriber, you can access the research using a free trial.
Unlocking the Future of Edge Computing: The Pivotal Role of Kubernetes in Navigating the Next Network Frontier
The diversity of edge devices, ranging from low-power, small form factor multicore devices to those with embedded GPUs, underscores a tremendous opportunity to unlock new network capabilities and services. Edge computing addresses the need for real-time processing, reduced latency, and enhanced security in various applications, from autonomous vehicles to smart cities and industrial IoT.
In my research, it became evident that the demand for edge connectivity and computing is being addressed by a diverse market of projects, approaches, and solutions, all with different philosophies about how to tame the space and deliver compelling outcomes for their users. What’s clear is a palpable need for a standardized approach to managing and orchestrating applications on widely scattered devices effectively.
Kubernetes has emerged as a cornerstone in the realm of distributed computing, offering a robust platform for managing containerized applications across various environments. Its core principles, including containerization, scalability, and fault tolerance, make it an ideal choice for managing complex, distributed applications. Adapting these principles to the edge computing environment, however, presents special challenges, such as network variability, resource constraints, and the need for localized data processing.
Kubernetes addresses these challenges through features like lightweight distributions and edge-specific extensions, enabling efficient deployment and management of applications at the edge.
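To make that concrete, below is a minimal sketch, written for illustration rather than taken from any vendor, of what targeting an edge workload looks like in Kubernetes terms: a Deployment manifest, built in Python and emitted as YAML, that pins a small agent to edge-labeled nodes, tolerates an edge-only taint, and caps its resource footprint. The labels, taint, image, and limits are hypothetical; the same manifest is accepted by a lightweight distribution such as K3s or MicroK8s just as by a full cluster.

```python
import yaml  # PyYAML

# Hypothetical edge workload: pinned to edge-labeled nodes, tolerating an
# 'edge-only' taint, with tight resource requests and limits.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "telemetry-agent", "labels": {"app": "telemetry-agent"}},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"app": "telemetry-agent"}},
        "template": {
            "metadata": {"labels": {"app": "telemetry-agent"}},
            "spec": {
                "nodeSelector": {"node-role.example.com/edge": "true"},
                "tolerations": [
                    {"key": "edge-only", "operator": "Exists", "effect": "NoSchedule"}
                ],
                "containers": [
                    {
                        "name": "agent",
                        "image": "registry.example.com/telemetry-agent:1.0",
                        "resources": {
                            "requests": {"cpu": "100m", "memory": "64Mi"},
                            "limits": {"cpu": "250m", "memory": "128Mi"},
                        },
                    }
                ],
            },
        },
    },
}

print(yaml.safe_dump(deployment, sort_keys=False))
```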
Additionally, Kubernetes plays a pivotal role in bridging the gap between developers and operators, offering a common development and deployment toolchain. By providing a consistent API abstraction, Kubernetes facilitates seamless collaboration, allowing developers to focus on building applications while operators manage the underlying infrastructure. This collaboration is crucial in the edge computing context, where the deployment and management of applications across a vast number of distributed edge devices require tight integration between development and operations.
With common deployment in sectors like healthcare, manufacturing, and telecommunications, the adoption of Kubernetes for edge computing is set to increase. This will be driven by the need for real-time data processing and the benefits of deploying containerized workloads on edge devices. One of the key use cases driving the current wave of interest for edge is the use of AI inference at the edge.
The benefits of using Kubernetes at the edge include not only improved business agility but also the ability to rapidly deploy and scale applications in response to changing demands. The AI-enabled edge is a prime example of how edge Kubernetes can be the toolchain to enable business agility from development to staging to production all the way out to remote locations.
With growing interest and investment, new architectures that facilitate efficient data processing and management at the edge will emerge. These constructs will address the inherent challenges of network variability, resource constraints, and the need for localized data processing. Edge devices often have limited resources, so lightweight Kubernetes distributions like K3s, MicroK8s, and Microshift are becoming more popular. These distributions are designed to address the challenges of deploying Kubernetes in resource-constrained environments and are expected to gain further traction. As deployments grow in complexity, managing and securing edge Kubernetes environments will become a priority. Organizations will invest in tools and practices to ensure the security, compliance, and manageability of their edge deployments.
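Because allocatable capacity is the binding constraint on these devices, a common first step is simply inventorying what each node can run. Here is a short illustrative sketch using the official Kubernetes Python client, assuming a reachable cluster (K3s, MicroK8s, or otherwise) and a local kubeconfig:

```python
from kubernetes import client, config

# Assumes a kubeconfig pointing at the edge cluster.
config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    alloc = node.status.allocatable  # e.g. {'cpu': '2', 'memory': '1902828Ki', ...}
    print(f"{node.metadata.name}: cpu={alloc.get('cpu')}, memory={alloc.get('memory')}")
```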
When preparing for the adoption and deployment of Kubernetes at the edge, organizations should take several steps to ensure a smooth process. Although containers have existed in some form or fashion since the 1970s, modern Kubernetes orchestration is still early in its lifecycle and lacks maturity. Even with its status as the popular standard for distributed computing, Kubernetes has still not reached adoption parity in industry with virtualized computing and networking.
Business Requirements
Enterprises should first consider the scale of their operations and whether Kubernetes is the right fit for their edge use case. Deployment of Kubernetes at the edge must be weighed against the organization’s appetite to manage the technology’s complexity. It’s become evident that Kubernetes on its own is not enough to enable operations at the edge. Access to a skilled and experienced workforce is a prerequisite for its successful use, but due to its complexity, enterprises need engineers with more than just a basic knowledge of Kubernetes.
Solution Capabilities
Additionally, when evaluating successful use cases of edge Kubernetes deployments, six key features stand out as critical ingredients:
How a solution performs against these criteria is an important consideration when buying or building an enterprise-grade edge Kubernetes capability.
Vendor Ecosystem
Lastly, the ability of ecosystem vendors and service providers to manage complexity should be seriously considered when evaluating Kubernetes as the enabling technology for edge use cases. Enterprises should take stock of their current infrastructure and determine whether their edge computing needs align with the capabilities of Kubernetes. Small-to-medium businesses (SMBs) may benefit from partnering with vendors or consultants who specialize in Kubernetes deployments.
Best Practices for a Successful Implementation
Organizations looking to adopt or expand their use of Kubernetes at the edge should focus on three key considerations:
The integration of Kubernetes into edge computing represents a significant advance in managing the complexity and diversity of edge devices. By leveraging Kubernetes, organizations can harness the full potential of edge computing, driving innovation and efficiency across various applications. The standardized approach offered by Kubernetes simplifies the deployment and management of applications at the edge, enabling businesses to respond more quickly to market changes and capitalize on new business opportunities.
The role of Kubernetes in enabling edge computing will undoubtedly continue to be a key area of focus for developers, operators, and industry leaders alike. The edge Kubernetes sector is poised for significant growth and innovation in the near term. By preparing for these changes and embracing emerging technologies, organizations can leverage Kubernetes at the edge to drive operational efficiency, innovation, and competitive advantage for their business.
To learn more, take a look at GigaOm’s Kubernetes for edge computing Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
If you’re not yet a GigaOm subscriber, you can access the research using a free trial.
Unleash Your Unstructured Data: A Strategic Playbook
Unstructured data management (UDM) solutions offer a powerful set of features that fundamentally transform the way organizations interact with their unstructured data:
UDM is already driving transformation across numerous industries. Here’s just a glimpse of its impact:
CTOs, CIOs, and chief data officers are the organizational champions of UDM initiatives. Key considerations include:
Through my research on UDM, I’ve seen firsthand how the landscape is evolving. The integration of AI and ML is set to play a transformative role in unstructured data management. Increasingly sophisticated AI models will automate tasks, uncover deeper insights, and streamline workflows, optimizing the entire data management process. This AI-powered transformation will extend the boundaries of what’s possible with unstructured data.
As concerns around data sovereignty intensify, solutions offering agility, multicloud support, edge computing options, and on-premises deployment will become increasingly sought after. Organizations demand complete control over their data, and UDM solutions must provide the flexibility needed to meet strict requirements and adapt to changing regulations.
Security remains a paramount concern in the world of unstructured data. UDM solutions will evolve to incorporate even tighter security measures. Expect to see seamless integration with anomaly detection, automated data classification, and streamlined compliance tools that ensure this often sensitive data is meticulously protected.
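As a toy illustration of the automated classification idea, and emphatically not how any particular UDM product works, the sketch below scans a folder of text files and tags those that appear to contain email addresses or US Social Security-style numbers, the kind of signal a downstream policy engine could act on. The folder name and patterns are hypothetical.

```python
import re
from pathlib import Path

# Toy patterns; real UDM classifiers use far richer models and policies.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(root: str) -> dict:
    """Return {file path: [matched tags]} for files that look sensitive."""
    tags = {}
    for path in Path(root).rglob("*.txt"):  # hypothetical corpus of text files
        text = path.read_text(errors="ignore")
        hits = [name for name, rx in PATTERNS.items() if rx.search(text)]
        if hits:
            tags[str(path)] = hits
    return tags

if __name__ == "__main__":
    print(classify("./unstructured-data"))
```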
The future of UDM is exciting and dynamic. As the field matures, we can expect further innovations that empower organizations to truly unlock the power of knowledge hidden within their unstructured data, driving transformation across countless industries.
To learn more, take a look at GigaOm’s UDM Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
If you’re not yet a GigaOm subscriber, you can access the research using a free trial.
Harnessing the Power of Cloud Performance Testing
Cloud performance testing is a crucial aspect of ensuring the reliability and efficiency of software applications hosted in cloud environments. Cloud computing technologies have achieved high adoption levels in many organizations, requiring key stakeholders on software teams to ensure applications can scale to meet demand.
That demand is driven not only by the volume of transactions, data, and processing work, but also by the wide range of users in various technology roles, including developers, testers, quality assurance (QA) personnel, development operations (DevOps) teams, performance engineers, and business analysts. Without performance testing tools, this demand would be far more difficult to meet and manage.
It is interesting that several cloud performance testing platforms have aligned with observability platform capabilities. This alignment lets teams leverage information from observability solutions and compare performance test results with ongoing observability metrics to continually fine-tune performance.
Another alignment taking place is that many cloud performance testing tools are becoming part of a larger testing suite for cloud capabilities. Some are becoming a part of the “shift left” trend, adding performance testing earlier in the development cycle or as a part of DevOps testing. This definitely helps to alleviate the age-old issue of testing performance right before a “go live” event only to find out that the application has performance issues. In addition, the DevOps testing suites provide reuse of functional and performance test scripts throughout the development process.
The movement to shift testing left into the development lifecycle rather than post-process—after deployment to a production or stage environment—means that business leaders can understand the performance and cost of a feature earlier in the lifecycle and better address prioritization of work (backlog items) to ensure the final solution optimizes the business value of a solution.
Ease of use for both technical and non-technical users continues to be a huge focus for vendors and companies alike. It is driving a key movement in the democratization of load testing and automated test creation. Most solutions either offer GUIs to manage tests or provide methods to record users navigating the application to create load-testing scripts automatically.
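For readers who haven’t seen load testing expressed as code, here is a minimal sketch using the open-source Locust framework as one example; the endpoints, weights, and host are placeholders. Because it is just a Python file, it can be versioned alongside application code and reused throughout the development process, which is what makes the shift-left reuse described earlier practical.

```python
from locust import HttpUser, task, between

class CatalogUser(HttpUser):
    # Simulated think time between requests for each virtual user.
    wait_time = between(1, 3)

    @task(3)
    def browse_catalog(self):
        self.client.get("/api/products")  # hypothetical endpoint

    @task(1)
    def view_cart(self):
        self.client.get("/api/cart")  # hypothetical endpoint
```

A run might look something like `locust -f loadtest.py --host https://staging.example.com`, with virtual users ramped from the web UI or headless flags.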
Over the next few months, I anticipate that most solutions will continue to focus on ease of use, including the use of generative AI in test creation and assessment and for test results summarization and analysis. Another ongoing focus is allowing more testers of all types to create and run tests effectively throughout the development process and ongoing operation.
From a cost perspective, these solutions continue to offer a cost-effective tiered approach to licensing to meet the needs of companies of any size. In addition, most provide some form of pay-as-you-go options for testing only as it is needed.
Cloud performance testing—with its scalability, cost-effectiveness, global reach, and automation—empowers teams to deliver high-quality performance in the cloud that meets and exceeds user expectations.
To learn more, take a look at GigaOm’s cloud performance testing Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
If you’re not yet a GigaOm subscriber, you can access the research using a free trial.
Chaos Theory and Observability
IT chaos is a function of monitoring, observability, and intelligence. Yes, I added intelligence, but I’m not talking about artificial intelligence (AI)—yet. Just as monitoring has generated more data than humans can consume, observability can produce more observations than anyone can understand. The overload of observation information is particularly true when multiple observation tools come into play.
Machine learning can help, but the questions we want to answer are changing. Once, we wanted to know if services in a public cloud worked and how to merge that data with the on-premises noise. Now, the questions have changed to what to do about the observations. Automation allows restarting poorly performing items and expanding memory or computing power on demand, but you have to store the data somewhere, and storage is not free. Leading observability solutions now include real-time cost comparisons between cloud vendors. The best observability tools have financial operations (FinOps) abilities to find underused, overused, and abandoned resources in clouds (public or private).
Observability tooling has enough data to predict future states. Unfortunately, chaos theory does not help. Data at the element level does not exist at the observability level. Regression analysis, least-squares fits, and more complicated algorithms allow the prediction of chaos. The more data available, the more accurate the predictions, but storing data is costly. Vendors are addressing the issues with consumption-based licensing, lower-cost storage tiers, and other methods to deal with the wave of data needed for observability.
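As a trivial illustration of the least-squares point, and far simpler than anything observability vendors actually ship, the sketch below fits a linear trend to two weeks of synthetic disk-usage samples and extrapolates a week ahead:

```python
import numpy as np

# Synthetic daily disk-usage percentages for the last two weeks.
days = np.arange(14)
usage = np.array([61, 62, 62, 63, 65, 64, 66, 67, 69, 70, 70, 72, 73, 74], dtype=float)

# Least-squares linear fit, then extrapolate one week past the observed window.
slope, intercept = np.polyfit(days, usage, deg=1)
forecast_day = 21
forecast = slope * forecast_day + intercept
print(f"trend: {slope:.2f}% per day, day-{forecast_day} forecast: {forecast:.1f}%")
```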
IT chaos will never end, but at least we can try to manage it. The new hope is generative AI (GenAI)—maybe.
The chaos function contains the steps from monitoring to observability to intelligence and requires new approaches to answer questions. Monitoring tells us the state of items, observability can create relationships and provide a meta view of the elements, and intelligent questions are possible with the help of GenAI.
Ask an observability tool when the next outage will occur, and you may get an answer. Ask it to automate a known failure mode, and it performs a perfect dance. Ask an observability tool if the enterprise is OK, and you get nothing. The question is beyond its capabilities. Observability tools as they exist today focus on IT, including developers in DevOps pipelines, operations management team members working to keep the lights on, and the newly coined (by my more than 40-year standard) site reliability engineers (SREs). Observability explains the data from monitoring.
Enter GenAI, the big rock in the pond creating its version of chaos. In chaos theory, a single element can tip an entire system over the edge. The math makes this abundantly clear (I’ll get to that in a moment). So, what happens next?
GenAI is already improving IT, from better chatbots to consuming all the data and providing remarkable insights. Yet GenAI is brand new and disruptive. Few observability vendors are using it to significant effect now, and even fewer can predict its impact 24 to 26 months from now.
Observability can slow the devolution into chaos, pointing to a calmer IT environment with GenAI somewhere in the future. Actual intelligence for the enterprise comes when GenAI consumes data from every source in the company, allowing unthinkable questions and a future where the tsunami of GenAI-created change does not disrupt the company.
I’ve mentioned chaos theory a few times. Let’s look into what it is. Chaos theory is a popular trope that allows writers to invent seemingly impossible situations the protagonists must overcome or to base an entire story concept on moving a single item. If any large-scale, easily conceived system can be said to embody chaos, then information technology stands out. Chaos is the normal state of IT, particularly in large enterprises. I’m going to lay out the math for you.
Hold on. Why am I writing about mathematics in an IT blog?
I’m a physicist, and though I’ve been doing IT for over 40 years, I rely on my education for even the most mundane things. Observability and chaos theory are related—the how and why are essential when we look at the entire enterprise. I could have used entropy, but chaos theory is sexier and closer to the reality of an IT ecosystem. Now, to the esoteric math discussion.
Chaos theory has equations that help mathematicians and physicists analyze the systems under study. In 1975, Robert May created a model to demonstrate the chaotic behavior of dynamic systems. I have modified May’s model for incidents:
I_{n+1} = r · I_n · (1 − I_n)
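To see why this simple recurrence earns the word chaos, here is a small sketch that iterates it for two values of r. I’m reading I_n informally as a normalized incident level, and the specific values are mine, chosen for illustration. Below roughly r ≈ 3.57 the sequence settles into a fixed point or a repeating cycle; above it, starting values that differ by only 0.001 end up in very different places.

```python
def iterate(r, i0, steps=50):
    """Iterate the incident recurrence I_{n+1} = r * I_n * (1 - I_n)."""
    i = i0
    for _ in range(steps):
        i = r * i * (1 - i)
    return i

# Stable regime: nearby starting points end up in essentially the same place.
print(iterate(2.8, 0.200), iterate(2.8, 0.201))

# Chaotic regime: a 0.001 difference in the start produces wildly different outcomes.
print(iterate(3.9, 0.200), iterate(3.9, 0.201))
```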
In another version of Earth, I can simulate every IT element to identify systems and processes on the precipice of chaos and magically heal them. IT does not create dinosaurs, except in the form of mainframe computers running COBOL.
OK, that isn’t happening, but I can monitor all those elements and gather state information (on or off), metrics (memory usage, CPU performance), and more. Then I can send all that information to a team to determine the system’s chaos level and respond accordingly.
Oops, BAM! We have another data glut (monitoring often accounts for 25% of network traffic in a large enterprise).
Observability strives to infer a system’s internal state from its external outputs. We have scads of data but no idea what it means. Observability tooling, whether specifically for public and private clouds, networks, storage, or applications, is a view into the chaos.
May’s equation and observability intersect. Here’s how:
Technology impacts us almost everywhere—doctor visits, the news, social media, refrigerators, and even our cars (including gas-powered vehicles). The change in a single parameter can bring a company to its knees. Ask AT&T about a simple configuration change that brought their entire network down. Look into how British Airways had to cancel hundreds of flights because a software component failed after a simple change.
IT systems are always on the precipice of chaos. Observability tools are one way to examine every IT enterprise’s chaotic state.
To learn more, take a look at GigaOm’s cloud observability Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
If you’re not yet a GigaOm subscriber, you can access the research using a free trial.
The Time is Right to Review Your Enterprise Firewalls
Well, no. If you think the enterprise firewall market is standing still and not worth a deeper look, you may be missing out.
In the last few months, I’ve done more work in the firewall and connectivity space than I had for a long time. What I discovered was that firewall vendors are delivering some game-changing innovations in their solutions. Not that this should be a huge surprise—the reality is our organizations have changed significantly in recent years, driving new demands and, of course, new risks. This has made innovation necessary. And these innovations are more than cool new features or new “nerd knobs” to tweak. They are changes that can, in turn, help drive innovation in the way organizations operate and deliver IT services, supporting improved security and business transformation.
Simply put, it’s the cloud. The cloud has changed much of the way we do all our computing tasks, and we do them now at cloud scale. Enterprise firewalls are no different. Responding to today’s threats requires that sort of scale, not only for the ability to gather vast amounts of telemetry but also for what it allows us to do. Cloud compute enables security vendors to work through this telemetry to provide analytics and intelligence that we can’t get any other way. Vendors are using this cloud intelligence to enhance firewall security offerings. Solutions are being integrated with cloud intelligence platforms to offer rapid, accurate threat detection and response across areas like domain name system (DNS) security and zero-day vulnerability detection, and to provide enhanced defense against DDoS and other attacks.
Connectivity and Access
The modernization of communications is something many enterprises are considering. Low-cost, high-speed internet access is driving companies to move away from inflexible and expensive traditional WAN connections. Access demands have also changed, with traditional VPNs lacking scale and often offering a poor user experience.
This has spurred major changes from vendors, including the addition of software-defined wide area networks (SD-WAN) and zero-trust network access (ZTNA) to leading solutions.
The Move to Cloud-Based Security
One of the biggest changes in the firewall market is the move to secure access service edge (SASE). SASE brings a cloud-native approach to dealing with the security, connectivity, and access capabilities traditionally provided by enterprise firewalls, endowing them with the scale and capabilities the cloud provides. All of the major firewall providers see SASE as fundamental to their strategy going forward. To be clear, this doesn’t mean they are going to de-emphasize their firewalls, but they are all increasingly integrating them with these large-scale, cloud-based security solutions.
This is a big win for the enterprise, as it gives them the opportunity to add cloud benefits directly to their firewall strategy today. Moreover, for those considering SASE adoption, it provides a smooth on-ramp that lets them plan for and migrate to SASE architecture in the future.
Does this mean that firewalls are going away? Absolutely not. Firewalls will continue to be needed by organizations of all sizes, from small businesses to huge enterprises that need hundreds of gigabits per second of throughput in their data centers. But it is also clear that the additional capabilities modern enterprise firewalls can deliver bring great opportunities for organizations to transform their security and communications operations to provide better performance, tighter security, and lower costs.
With all this said, let’s not forget that new firewall projects are complex and difficult, and come with the risk of disruption. But don’t let this keep you from at least reviewing the space because it is full of innovation that can help businesses transform with a host of new capabilities that provide the security needed in the modern world. So, now is as good a time as any to take another look at your firewall strategy.
To learn more, take a look at GigaOm’s enterprise firewall Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
If you’re not yet a GigaOm subscriber, you can access the research using a free trial.
Unlocking Efficiency with Enterprise Process Automation
The convergence of intelligent document processing (IDP), robotic process automation (RPA), and business process management (BPM), along with artificial intelligence (AI), has driven a massive change in how companies and software vendors envision the product life cycle. The continued acceleration of digital transformation is leading to an increased interest in intelligent automation platforms. These solutions are increasing efficiency and cutting costs in several industries.
Market Overview
Vendors in this market today are taking a holistic approach to move beyond siloed IT and business process automation. The goal is to provide highly integrated solutions containing RPA, IDP, BPM, and process and task mining, using AI throughout the platform. The focus is on gaining greater business value and organizational agility in a highly efficient operation.
Most vendors are adding and boosting functionality to enhance no-code and low-code capabilities and tightly integrated components. AI has been advancing rapidly, so look for significant AI enhancements in these solutions as the enterprise process automation (EPA) market evolves. Generative AI will be one of the most interesting emerging capabilities, and we anticipate that vendors will find creative and innovative ways to leverage it in the near future.
To speed adoption and time to market, many vendors offer a large number of prebuilt components, templates, bots, and integrations. Some provide plug-and-play options, while others offer customization.
Buyer Requirements
Ideally, prospective customers should have a hyperautomation strategy to help guide the requirements of the solution. However, it is not necessary to have a complete vision to gain benefit from these tools. IT and business leaders should work together to determine use cases for an implementation roadmap that takes into account data readiness, ease of integration, ROI, and upstream and downstream teams, systems, and processes that are impacted.
Industry-specific and departmental functions are also key elements that need to be understood in order to compare the tools. Potential buyers should compare requirements and vendors with an eye toward performing a proof of concept (POC) with the vendor. A POC will help ensure the solution will work for an organization. It can also indicate how easy a solution may be to use and implement.
EPA is not just about efficiency; it’s about transforming organizations and driving value through technology-driven processes.
To learn more, take a look at GigaOm’s EPA Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
If you’re not yet a GigaOm subscriber, you can access the research using a free trial.
CIEM: Bridging the Gap Between IAM and Cloud Security
In the world of on-premises storage and computing, most accounts accessing enterprise systems are attached to human entities. Solutions have been developed to ensure good governance of these identities and their access privileges during their lifecycle in the enterprise. After a relatively short time, companies that have adopted identity and access management (IAM) solutions have been able to control who has access to what and for what reason.
Then cloud hosting and computing arrived with promises of reducing the acquisition, operation, and maintenance costs of enterprise IT systems. Cloud hosting and computing also promised gains in operational agility and flexibility of IT tools. This promise, of course, is real and the gains are indeed achievable. However, the concepts of identity, entitlement, and privileges inherent in the cloud are no longer the same as they are for on-premises infrastructure.
In 2020, the term cloud infrastructure entitlement management (CIEM) appeared for the first time. CIEM, as a concept, has emerged to address all the new use cases specific to cloud computing. Some might consider CIEM as the natural extension of IAM into the cloud. But CIEM helps organizations to contend with the growing number of non-human identities, whether they are internet of things (IoT) object machines or software acting in the cloud, as well as ephemeral identities that require rights and access only for short periods. Additionally, CIEM solutions help reconcile the actions of these different types of identities across the various cloud platforms of the enterprise, as each cloud service provider (CSP) has its own vision of IAM in its platform.
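To ground the concept, here is a deliberately simplified sketch of the core CIEM question: which identities, human or otherwise, hold entitlements they never exercise? The inventory and last-use records below are hypothetical stand-ins for the data a real CIEM product would normalize out of each CSP’s APIs.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical entitlement inventory and last-use records, normalized across clouds.
entitlements = {
    "svc-build-bot": {"s3:PutObject", "ec2:StartInstances", "iam:PassRole"},
    "iot-fleet-agent": {"iot:Publish", "s3:PutObject"},
    "alice@example.com": {"s3:GetObject", "kms:Decrypt", "ec2:TerminateInstances"},
}
now = datetime.now(timezone.utc)
last_used = {
    ("svc-build-bot", "s3:PutObject"): now - timedelta(days=2),
    ("iot-fleet-agent", "iot:Publish"): now - timedelta(days=1),
    ("alice@example.com", "s3:GetObject"): now - timedelta(days=5),
}

STALE_AFTER = timedelta(days=90)

def unused_entitlements():
    """Flag permissions with no recorded use, or none within the staleness window."""
    findings = []
    for identity, perms in entitlements.items():
        for perm in sorted(perms):
            used = last_used.get((identity, perm))
            if used is None or now - used > STALE_AFTER:
                findings.append((identity, perm))
    return findings

for identity, perm in unused_entitlements():
    print(f"right-sizing candidate: {identity} -> {perm}")
```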
There are three main categories of CIEM solution providers:
The market is still young in terms of both CIEM solution providers and CIEM functionalities themselves. Regarding CIEM solution providers, consolidations are underway, notably precipitated by the move of CIEM-centric companies into the realm of larger and more diversified IT players.
When considering a CIEM solution, several important factors should be kept in mind:
To learn more, take a look at GigaOm’s CIEM Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
If you’re not yet a GigaOm subscriber, you can access the research using a free trial.
The Cloud Networking Polysemy
This creates a lot of confusion; at least it did for me. It’s a prime example of the anchoring bias, whereby I initially defaulted to the first definition of cloud networking that I came across.
In this blog, I’ll lay out all the versions of this technology that I’ve identified over the past three years to help buyers find the right solution for their business requirements instead of stumbling over semantics.
Vendors can be categorized as follows:
In their most basic forms, all of these takes on cloud networking can be used for the following use cases: networking inside the public cloud, networking between on-premises and public clouds, networking native to the public cloud as part of the infrastructure stack, cloud networking professional services, and networking in the data center that extends to the public cloud.
Firstly, none of these approaches or definitions are wrong. Likewise, employing the same “cloud networking” term to describe multiple approaches isn’t an issue either. The problem arises when:
The market ended up in this situation because a range of different vendors were solving similar yet different customer problems, and they all used the same label when advertising their solution. While third parties can do their best to guide buyers navigating new markets, they too are subject to vendors’ messaging. That’s why I believe that the power of establishing clarity originally lies with the vendors and their technical marketing departments.
Before beginning their search, I recommend that prospective customers use the vendor categories above as a reference point against today’s vendor landscape. A vendor-based approach is much more useful because engagements with vendors whose profile and use cases don’t fit your organization’s needs will most likely be unproductive.
To learn more, take a look at GigaOm’s Cloud Networking Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
If you’re not yet a GigaOm subscriber, you can access the research using a free trial.
Springing the Trap: Snagging Rogue Insiders with Deceptive Tactics
Deception technology has emerged as a new and innovative approach to safeguard against insider threats. Deception platforms set up decoys and traps that attract insider attention by masquerading as real sensitive company resources. When an insider attempts to access the deceptive assets, alerts are generated to cue incident response teams. By providing fake systems and data that appear credible to insiders, deception technology can shine a spotlight on unauthorized insider actions that point to a potential compromise. Evaluating this new form of defensive technology is critical for organizations seeking improved safeguards against the insider threat challenge.
Malicious insiders often undertake stealthy approaches to unauthorized activity that enable them to fly under the radar. Because they have legitimate access and credentials, insiders don’t need to hack systems and often access sensitive resources as part of their regular job duties. These factors make distinguishing innocent actions from illicit insider activities extremely tricky for security teams.
Additionally, malicious insiders are adept at understanding their organizations from the inside out, as they possess intricate knowledge of systems, data stores, security processes, and potential loopholes. This arms them with the blueprints to undertake surgical, stealthy attacks that easily evade traditional safeguards. They can slowly siphon small amounts of data over time, camouflaging their tracks along the way, or they may lay low for months or years before acting, all while maintaining a flawless job performance.
Many security tools and policies are designed to guard against outside attackers but fall short when applied to insider threats. Firewalls, network monitoring, access controls, and other perimeter defenses can be neatly circumvented by insiders since they don’t flag authorized account activity. New approaches are sorely needed to account for the particular challenges posed by the insider threat. Deception technology has risen to fill this critical security gap, providing alerts and visibility where other controls are blind.
Deception technology platforms provide specially designed traps and decoys that ensnare malicious insider activity. The systems create fake digital assets, including documents, emails, servers, databases, and even entire networks that appear credible to insiders but contain no real production data. When an insider takes the bait and attempts to access these decoys, alerts are triggered automatically.
Well-constructed deception environments emulate the actual IT infrastructure and include decoys that very closely mirror the type of sensitive or regulated data an insider might target. For example, a healthcare company could deploy deceptive personal health information (PHI) records with bogus patient data that is formatted identically to real electronic health records. The goal is to make the lures attractive enough for malicious actors to fully engage.
Advanced deception solutions don’t just generate alerts; they also provide monitoring around deceptions to capture evidence for investigations. Detailed forensics reveal exactly what the insider looked at, downloaded, manipulated, or destroyed, documenting their footsteps throughout the attack sequence. Some solutions will even feed this threat intel back into other security systems like SIEMs to accelerate incident response.
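Here is a toy sketch of those alerting mechanics, illustrative only and nothing like a production deception platform: plant a decoy file, then emit a structured, SIEM-ingestible alert whenever its access time changes. The path and field names are hypothetical, and on filesystems mounted with noatime the access-time signal would not fire.

```python
import json
import time
from pathlib import Path

DECOY = Path("decoys/ACME_Merger_Strategy.docx")  # hypothetical decoy location

def plant_decoy() -> float:
    """Create the decoy and record its initial access time."""
    DECOY.parent.mkdir(parents=True, exist_ok=True)
    DECOY.write_text("Confidential draft - do not distribute")  # fake content
    return DECOY.stat().st_atime

def watch(last_atime: float, poll_seconds: int = 10) -> None:
    """Poll the decoy's access time and emit an alert when it changes."""
    while True:
        atime = DECOY.stat().st_atime
        if atime > last_atime:
            alert = {
                "event": "decoy_accessed",
                "asset": str(DECOY),
                "accessed_at": time.ctime(atime),
                "severity": "high",
            }
            print(json.dumps(alert))  # in practice, forward to the SIEM
            last_atime = atime
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch(plant_decoy())
```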
Deception tech therefore delivers immense value to insider threat programs through early detection, detailed visibility, and low false positives. By employing deception, organizations gain an additional line of defense against malicious insiders that traditional tools miss. What sets deception apart is its ability to expose otherwise authorized users as threats the moment they act on deceptive assets.
Implementing deception technology for effective insider threat detection requires planning and strategy. The systems provide maximum value when decoys closely resemble the true confidential data insiders have access to and integrate with other key security tools.
For example, a decoy document labeled “ACME Merger Strategy” will carry more weight if positioned among other documents that a financial analyst would normally review for their job duties. Similarly, bogus engineering diagrams of forthcoming products can trap insider threats within an R&D department when mixed alongside real confidential schemas stored on the network.
Insider threat programs will often conduct assessments to map sensitive data and identify which systems legitimately house this information across the organization. Deception technology can then mirror real assets and place lures accordingly. Since insiders are authorized to access parts of the corporate network, logging into a system alone should not trigger alerts. Instead, alerts should activate only when deceptive content is touched.
Thoughtful deployment strategies maximize the likelihood that insider threats interact with lures and spring deception traps. Network architects may advise on critical network locations for trap insertion, while cloud security teams can identify risky SaaS apps across business units that are prime candidates for deception. Properly integrated, deception solutions reinforce the entire security ecosystem.
Insider threats present an undeniably tricky challenge for organizations to overcome. Malicious insiders operate behind the lines, often able to circumvent traditional perimeter defenses like firewalls, IDS, and access controls. Their intimate knowledge, authorized credentials, and internal vantage point equip insiders to undertake stealthy, tactical strikes against sensitive systems and data.
By proactively baiting malicious actors and generating alerts when they interact with fakes, deception technology delivers a robust new capability for protecting an organization’s crown jewels against compromise. As part of a defense-in-depth approach, deception solutions reinforce the security stack, serving as the innermost layer of protection against lurking insider danger.
Deception technology provides a uniquely potent counter to these threats by laying traps internally behind perimeter defenses. Well-positioned decoys and lures offer tremendous value for insider threat programs seeking early detection, enhanced visibility, and rapid response to unauthorized activity. As the cybersecurity industry wakes up to the rising danger of insider attacks, deception platforms offer a rapidly maturing solution to this menacing threat.
To learn more, take a look at GigaOm’s deception technology Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
If you’re not yet a GigaOm subscriber, you can access the research using a free trial.