DNS Security: The Forgotten Hero of Your Cybersecurity Strategy
Often, however, it’s the less glamorous aspects of security that deliver significant benefits. One such area is everyone’s favorite technology to love or hate: the domain name system (DNS) and related services. We’ve all heard the phrase “it’s always DNS” when we can’t connect to a familiar website. Part of the reason we hear this is that DNS is so fundamental to our day-to-day communications. DNS is one of the building blocks of internet communications; it’s the way we tie impossible-to-remember IP addresses to the easy-to-remember names we are used to. We rarely attempt to connect to a system via its address; instead, whether the system is internal or external, we will usually connect via its DNS name.
DNS is so fundamental to the way modern IT works that it’s become a key target for cyberthreat actors. A threat actor can use DNS to mount a wide range of attacks, including DNS hijacking, spoofing, and typo-squatting. These techniques redirect users to malicious locations and applications that appear legitimate, where attackers can phish for credentials, deploy malicious code, or steal data. Bad actors also realize that, because of its critical nature, denying access to DNS will hugely impact organizations, stopping users from carrying out day-to-day tasks. Denying access to DNS services can also block access to applications and information that a business and its customers rely on. This has led to a significant re-emergence of denial-of-service (DoS) attacks focusing on DNS infrastructure.
There is, however, good news. While the foundational part DNS plays makes it a target, it also makes it an extremely strong weapon in our cybersecurity defense arsenal. It’s an often-forgotten weapon but a weapon nevertheless. At the root of this is the fact that almost all cyberattacks will start by interacting with DNS. Whether it’s a simple phishing email or the beginnings of a complex malicious code deployment or data theft, the bad actor is very likely to make a DNS call, be that to a malicious website or some kind of command-and-control service.
Additionally, because cyberattacks often start with DNS, there is highly likely to be some initial activity that leaves behind clues about a potential upcoming attack. This may be the creation of unusual domains or the registration of “typo” domains: those within a letter or two of the real domain name. All these actions leave clues that modern DNS threat intelligence tools can spot and act on proactively.
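To make the “typo domain” clue concrete, here is a minimal sketch of the kind of edit-distance check a DNS threat intelligence feed can apply to newly registered domains. The domain names and the two-letter threshold are illustrative assumptions, not any vendor’s actual detection logic.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def flag_typo_domains(protected: str, new_registrations: list[str],
                      max_distance: int = 2) -> list[str]:
    """Flag registrations within a letter or two of the real domain."""
    return [d for d in new_registrations
            if 0 < edit_distance(d.lower(), protected.lower()) <= max_distance]

# Hypothetical feed of newly registered domains.
print(flag_typo_domains("example.com",
                        ["examp1e.com", "exarnple.com", "unrelated.org"]))
# -> ['examp1e.com', 'exarnple.com']
```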
DNS security tools add value by identifying risks and potential threats at these very early stages, which we can proactively isolate and mitigate, improving security and lowering the risk of an attack on our organization.
Gaining this benefit must be difficult, right? That’s the best news of all: DNS security solutions are easy to deploy, integrate into your current environment with low risk, and have little if any impact on users.
DNS security falls into two categories: protecting the DNS infrastructure itself from attack, and using the DNS resolution path as a control point to block access to malicious destinations.
Even with basic levels of protection, DNS security solutions can deliver a lot of value to an organization. For example, simply adding the protection service to the DNS resolution path means malicious domains can be quickly blocked, with new domains identified and blocked constantly. Additional filters can also be put in place to block domains by content type or by category, ensuring users are accessing only sites that are safe, secure, and appropriate. Even for mobile users, many vendors provide off-network protection, allowing organizations to enforce DNS security regardless of where a user resides or works.
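As a minimal illustration of what “adding the protection service to the resolution path” means, the sketch below wraps a hostname lookup with a blocklist and a category filter. The feeds and category names are hypothetical stand-ins for the intelligence a real protective DNS service maintains and updates constantly.

```python
import socket

BLOCKED_DOMAINS = {"malicious.example"}           # hypothetical threat-intel feed
DOMAIN_CATEGORY = {"casino.example": "gambling"}  # hypothetical categorization feed
BLOCKED_CATEGORIES = {"gambling", "malware"}      # organization policy

def safe_resolve(hostname: str) -> str:
    """Resolve a hostname only if it passes blocklist and category policy."""
    if hostname in BLOCKED_DOMAINS:
        raise PermissionError(f"{hostname} is on the threat blocklist")
    if DOMAIN_CATEGORY.get(hostname) in BLOCKED_CATEGORIES:
        raise PermissionError(f"{hostname} is blocked by category policy")
    return socket.gethostbyname(hostname)  # falls through to normal DNS
```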
If DNS security can be so useful, why is it not a frequent topic of conversation? I guess it gets overlooked for not being that exciting! DNS has been around as long as the public internet, so it’s not as alluring a topic as AI, automated threat detection, or managed security services. Regardless, DNS security is a very powerful tool.
If you want a low-risk, high-value cybersecurity investment that will improve your security posture, then I recommend you look into the DNS security space and understand how it can improve security, reliability, and performance. Put this often-forgotten security hero to work for your organization!
To learn more, take a look at GigaOm’s DNS security Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
If you’re not yet a GigaOm subscriber, you can access the research using a free trial.
The post DNS Security: The Forgotten Hero of Your Cybersecurity Strategy appeared first on Gigaom.
Insights from Runtime: How Kubernetes Resource Management Influences Cloud Development
Virtually all businesses that have existing investments in custom software wind up in some form of perpetual catch-up game with the state of the art. It’s the nature of the IT business, and it is more important to close that gap effectively than quickly. After all, we should expect to reap benefits from this continual technological advancement. In the absence of tangible rewards and improved delivery of specified outcomes, chasing the vanguard has the tendency to lead us in arbitrary directions.
One hiccup many organizations have encountered in deploying software to container clusters orchestrated by Kubernetes is unexpected increases in cost, especially cloud costs. In many circumstances, organizations anticipated that such a migration might increase costs and complexity in the near term but considered it part of a beneficial strategic trade-off, or expected that these costs could otherwise be brought quickly under control.
Put a pin on the architectural roadmap to mark this spot, halfway down a long desert highway and almost out of gas. After six months of work, you’ve migrated an important API to a bank of microservices on a managed Kubernetes service. There are no new features and it costs more to run, but it is sailing way up there in the cloud. Who gets to tell the CIO?
If you find yourself in this situation, you’re not alone, and there is good news: there are likely many opportunities to pare down costs without impacting performance. They range from low-hanging fruit that in many cases can bring costs back to ground level within weeks, to new opportunities for performance and availability gains made possible by deep insight into usage patterns.
There is a variety of strong Kubernetes resource management solutions on the market that provide a turnkey approach. They can be used both as an immediate remedy when the problem is acute and as an engine for continuous improvement.
The low-hanging fruit is typically a matter of fine-tuning the way workloads are configured and provisioned. In the extreme (but still prevalent) example, developers put off this “tuning” as a second phase. Resource configuration may have been an afterthought and neglected entirely, but in most cases, it really is prudent to “implement first, optimize second.”
A common response to these unexpected cost increases in the cloud has been for organizations to enact policies or bureaucratic measures to revisit how these resources have been provisioned and get a handle on the situation. While such policies may be a valuable addition to the continuous delivery process, they will not take you very far toward solving the problem. The ideal balance for meeting performance objectives without over-provisioning resources is not obvious and frequently changes.
Everyone is learning as they go, and beyond high-level prescriptions, it is difficult and impractical to achieve reliable fine-tuning without automation. Kubernetes resource management solutions offer automatic analysis and management of resources from the starting gate and exceed the capabilities of the underlying platforms in fundamental ways.
One example is the application of AI/ML. While the capabilities of AI/ML methodologies are burgeoning by the day, for two decades they have been a reliable way of analyzing big sets of data and spotting patterns or trends that would be difficult to intuit or identify through alternative methods. By analyzing usage patterns, resource consumption, key metrics like CPU and memory, and even custom application events, these tools automatically balance Kubernetes resources in a tight loop, ensuring that you are paying only for what you need.
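As a rough illustration of the shape of that analysis (emphatically not any vendor’s actual algorithm), the sketch below derives a container’s CPU request from a high percentile of observed usage and its limit from peak usage plus headroom. The sample data, percentile, and headroom factor are assumptions for the example.

```python
def recommend_cpu(samples_millicores: list[float],
                  request_pct: float = 90.0,
                  headroom: float = 1.15) -> dict:
    """Recommend a CPU request at the given percentile of observed usage
    and a limit at peak usage plus headroom, in millicores."""
    ordered = sorted(samples_millicores)
    # Nearest-rank percentile index, so one rare spike doesn't set the request.
    idx = min(len(ordered) - 1, int(len(ordered) * request_pct / 100))
    return {"request_m": round(ordered[idx]),
            "limit_m": round(ordered[-1] * headroom)}

# Hypothetical usage samples for one container, in millicores.
usage = [120, 135, 150, 110, 140, 160, 130, 125, 170, 145,
         155, 138, 122, 165, 148, 132, 158, 142, 175, 480]
print(recommend_cpu(usage))  # -> {'request_m': 175, 'limit_m': 552}
```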
Paying only for what you need is a key advantage of cloud computing. While it may have been possible 20 years ago to manage resources in a tight loop of analysis, there would have been little benefit because capacity was necessarily provisioned to the highest anticipated need. With cloud computing, you can use as little or as much as you need when you need it. However, with some types of cloud-native development, you still need to know how much you need, and as it turns out, that’s not a readily available factor in the equation.
Real-time insight into runtime resource requirements will, over time, shift left into the development process and architectural considerations. Architecturally, it is clear why breaking down monolithic designs into portable, ephemeral, and autonomous services permits certain advantages. But it is not always clear how that is best achieved or what unknown overhead trade-offs lurk in the darkness. Just as a debugger provides an inside view of a running process and APM tools provide a wealth of visibility into interoperating systems, Kubernetes resource management solutions provide intimate insight into how these systems scale under real-world usage. Over time, such knowledge will make its way back into the initial design of the software and help to unlock the inherent benefits of cloud computing.
To learn more, take a look at GigaOm’s Kubernetes resource management Key Criteria and Radar reports. These reports provide a comprehensive view of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
If you’re not yet a GigaOm subscriber, you can access the research using a free trial.
The post Insights from Runtime: How Kubernetes Resource Management Influences Cloud Development appeared first on Gigaom.
Best Practices for Implementing an ASM Solution
The proliferation of cloud services, the explosion of internet of things (IoT) devices, and the ever-growing complexity of IT infrastructure have created a vast and dynamic digital landscape. As assets multiply and diversify, they become harder to monitor and protect, leaving organizations vulnerable to cyberthreats and attacks. Attack surface management (ASM) solutions provide cyber defense through the continuous discovery of and insight into an organization’s attack surface.
In this blog, we’ll guide you through the essentials of implementing an ASM solution, offering insights applicable to a wide range of organizations. We’ll delve into how to prepare for a successful adoption and how to choose the right solution for your organization’s needs, and we’ll highlight common pitfalls and misconceptions to help you avoid potential complications. This comprehensive approach can streamline your journey toward effective ASM implementation, ensuring a smoother and more secure integration into your cybersecurity strategy.
ASM solutions provide an in-depth discovery of digital assets, including IPs, hostnames, domains, and social media profiles. However, prior to investing in an ASM tool, it’s valuable to conduct a more general pre-ASM inventory. This preliminary step will uncover gaps in your security framework and establish a baseline for comparison. It’s an essential exercise for understanding the current state of your digital asset management.
In doing this, you not only highlight the areas needing immediate attention, but you also set a reference point to gauge the impact and value of ASM later. This initial evaluation is crucial when it comes time to decide whether to renew your current ASM services or look for a different solution, as it offers clear insights into whether the current ASM tool is delivering tangible benefits in terms of security enhancement and resource allocation.
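Even a simple script can establish such a baseline. The sketch below resolves a list of already-known hostnames and records what answers; the hostnames are placeholders, and a real inventory would also draw on DNS zone data, cloud provider APIs, and certificate transparency logs.

```python
import csv
import socket

# Placeholder list; in practice, pull from CMDB exports, DNS zones, etc.
KNOWN_HOSTS = ["www.example.com", "mail.example.com", "vpn.example.com"]

def baseline_inventory(hosts: list[str], path: str = "pre_asm_baseline.csv") -> None:
    """Record each known host and its resolved addresses (or the failure)."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["hostname", "addresses", "status"])
        for host in hosts:
            try:
                addrs = sorted({info[4][0] for info in socket.getaddrinfo(host, None)})
                writer.writerow([host, " ".join(addrs), "resolved"])
            except socket.gaierror as err:
                writer.writerow([host, "", f"unresolved: {err}"])

baseline_inventory(KNOWN_HOSTS)
```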
Implementing an ASM solution should be guided by well-defined objectives. It’s essential to establish what you aim to achieve with an ASM solution, whether it’s enhancing visibility over the attack surface, reducing the number of vulnerabilities, or improving the response time to security incidents.
These objectives must align with the broader goals of the organization’s cybersecurity strategy. Clear objectives help in prioritizing tasks and allocating resources efficiently. They also provide a framework for evaluating the success of the ASM solution implementation, allowing for adjustments and improvements in the strategy over time.
Successful adoption of an ASM solution requires engaging and collaborating with stakeholders across various departments. This includes IT and security teams as well as operations, human resources, and legal—departments that interact with IT systems and influence the organization’s security posture.
Effective communication and collaboration among these teams can foster a culture of cybersecurity awareness throughout the organization. Engaging stakeholders early in the process helps in understanding their needs, addressing their concerns, and ensuring their cooperation in implementing and maintaining ASM practices. This holistic approach is key to ensuring that ASM is viewed not just as a “techy solution to a technical problem,” but as an integral part of the organization’s overall risk management strategy.
By addressing these key areas in the preparatory phase, organizations can lay a strong foundation for a successful ASM solution adoption. This process ensures that the implementation aligns with the organization’s cybersecurity strategy and involves all relevant stakeholders, leading to a more resilient and secure digital environment.
When considering the adoption of any technology, it’s important to recognize that an organization’s needs and available resources vary significantly with its size. Small-to-medium businesses (SMBs) and large enterprises have distinct challenges and priorities that influence their approach to ASM.
Adopting an ASM solution comes with its own set of challenges and potential missteps, some of which are commonly overlooked. Understanding these pitfalls is crucial for a successful ASM implementation.
One of the most frequent mistakes is underestimating the complexity and diversity of the organization’s digital assets. This includes websites, APIs, cloud services, and remote devices. Organizations often fail to fully comprehend the scale and intricacy of their digital footprint, leading to gaps in their ASM strategy. A comprehensive understanding of these assets is critical for effective monitoring and protection. Without it, significant vulnerabilities may remain undetected and unaddressed.
Another common oversight is neglecting the fluid nature of the attack surface. Attack surfaces are not static; they evolve constantly as new technologies are adopted and as existing systems are updated or decommissioned. Consequently, ASM is not a one-time activity but requires ongoing updates and maintenance. Organizations sometimes overlook this need for continual vigilance, leading to outdated security postures and increased risk exposure.
Perhaps the most serious strategic misstep is the failure to align ASM efforts with broader business objectives. ASM solutions should not operate in a vacuum but rather be an integral part of the organization’s overall risk management and business strategy. This means ASM initiatives should support and be informed by the organization’s goals, level of risk tolerance, and operational requirements. A misalignment here can result in wasted resources, efforts that do not effectively mitigate risk, or even initiatives that inadvertently hinder business operations.
Recognizing and addressing these common challenges and missteps can significantly enhance the effectiveness of an organization’s ASM strategy, ensuring that it protects against threats and aligns with and supports the organization’s broader business goals.
To learn more, take a look at GigaOm’s ASM Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
If you’re not yet a GigaOm subscriber, you can access the research using a free trial.
The post Best Practices for Implementing an ASM Solution appeared first on Gigaom.
SASE: Today’s Landscape and Tomorrow’s Horizons
Secure access service edge (SASE) emerged from the need to address the limitations of traditional, perimeter-based network architectures that were not designed for cloud-centric, mobile-first business environments. Representing the fusion of network and security services into a single, unified cloud service, the core components of SASE work in concert to deliver a consistent, secure, and optimized network experience.
The six essential elements of SASE include cloud access security broker (CASB), firewall as a service (FWaaS), secure web gateway (SWG), software-defined wide area network (SD-WAN), zero trust network access (ZTNA), and centralized management. CASB offers visibility and control over data in the cloud, and FWaaS extends firewall capabilities to the cloud. SWG provides safe internet access, while SD-WAN enables dynamic path selection for traffic across multiple transport services. ZTNA, perhaps the most critical component, enforces the principle of “never trust, always verify” by granting access based on the identity of users and devices. Providing visibility and control, a centralized control plane integrates all SASE components, enabling the management and enforcement of consistent security policies across the entire network.
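To illustrate the “never trust, always verify” principle in isolation, here is a conceptual sketch of a ZTNA-style authorization check, in which identity and device posture are evaluated on every request rather than inferred from network location. The attributes and policy table are simplified assumptions, not any product’s actual model.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    groups: set[str]
    device_managed: bool
    device_patched: bool
    app: str

# Hypothetical policy: which groups may reach which private application.
APP_POLICY = {"payroll": {"finance"}, "source-control": {"engineering"}}

def authorize(req: AccessRequest) -> bool:
    """Grant access only if both identity and device posture checks pass."""
    if not (req.device_managed and req.device_patched):
        return False  # unhealthy device fails verification regardless of user
    return bool(req.groups & APP_POLICY.get(req.app, set()))

print(authorize(AccessRequest("ana", {"finance"}, True, True, "payroll")))   # True
print(authorize(AccessRequest("ana", {"finance"}, True, False, "payroll")))  # False
```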
By combining these components, SASE delivers a unified solution that ensures secure connectivity for users, regardless of their location, and applications, irrespective of where they reside. The integration of these components within a single framework enables organizations to implement consistent security policies globally, improve scalability and flexibility, and reduce the complexity associated with deploying and managing multiple security solutions.
Organizations are recognizing the benefits of SASE in providing scalable, flexible, and secure access to applications and data regardless of location. Driven by the increasing adoption of cloud services and the shift to remote work, the current market size and growth rate reflect this trend, with forecasts indicating continued expansion.
SASE is instrumental in facilitating secure remote work. The COVID-19 pandemic accelerated the adoption of remote work, and SASE played a pivotal role in enabling this transition; by providing secure, seamless access to corporate resources, it has become a cornerstone of remote work strategies. Furthermore, as remote work becomes a permanent fixture, the importance of SASE is only set to increase.
However, despite its benefits, implementing SASE can be challenging. Organizations often struggle with the complexity of integrating various networking and security functions and the daunting task of replacing legacy systems. To overcome these hurdles, a strategic approach that includes careful planning, partner selection, and change management is essential.
When choosing a SASE solution, organizations must decide between a single-vendor or multivendor approach depending on the specific needs, capabilities, and investments of the organization. A single-vendor approach offers simplicity and ease of management with unified policy management and granular control, enhancing visibility into network traffic, security events, and policy enforcement. In addition, organizations typically benefit from discounts, reduced operational costs, and a lower total cost of ownership with a single-vendor solution. Furthermore, when problems arise, single-vendor solutions eliminate the blame game between vendors, leading to quicker problem resolution.
A multivendor approach, on the other hand, provides the flexibility to leverage existing investments and choose best-of-breed solutions for each component of the SASE framework, making it easier to adapt to changing business needs and integrate new technologies as they emerge. Vendors specializing in specific areas of the SASE framework may also offer more advanced and focused solutions in their domain of expertise, which can be beneficial for organizations with particular security or networking needs. Furthermore, by using multiple vendors, organizations can avoid becoming too reliant on a single provider, which reduces the risks associated with vendor lock-in and provides more negotiating leverage.
Organizations must also consider the potential drawbacks of each approach. Single-vendor SASE solutions often have gaps or weaknesses in certain areas, limiting the choice and flexibility of organizations, and can make organizations heavily dependent on a particular vendor’s suite of services, leading to vendor lock-in. Meanwhile, managing multiple vendors and solutions can be complex, requiring more effort to integrate and maintain the different components of the SASE architecture. Staff may also require more training to handle different components, and support may be less streamlined compared to a single-vendor approach. Moreover, smaller vendors are at a higher risk of being bought out or shutting down, which could disrupt services and require the organization to find new solutions.
Since selecting the right SASE partner(s) is critical in terms of compatibility, scalability, and support, organizations must weigh these factors against their specific needs, resources, and risk tolerance when deciding between a single-vendor and multivendor SASE strategy. While each has its advantages and challenges, the choice will often depend on specific business requirements and existing infrastructure.
Looking ahead, SASE is poised for further evolution. The integration of zero-trust principles into SASE architectures is expected to deepen, and the rise of edge computing will likely influence SASE solutions, pushing security closer to where data is generated and consumed. In addition, the emergence of 5G SASE solutions will allow organizations to extend their SASE environment to protect IoT devices. Furthermore, the future of SASE points toward the convergence of SASE and AI, leveraging advanced machine learning models to improve user experience and security performance.
Moreover, while multivendor SASE solutions enable the migration to a SASE framework, the preference for single-vendor SASE solutions is expected to grow, simplifying management and reducing complexity. As a result, several networking and security vendors are investing heavily in acquiring or developing SASE components to create fully integrated, single-policy engine SASE solutions. As technology continues to evolve, SASE will become even more critical in securing the distributed digital landscape.
SASE is not just a passing trend; it is a strategic approach addressing the complex security and networking needs of today’s and tomorrow’s digital enterprises. As the landscape evolves, staying informed and adaptable to SASE trends is vital for networking and security professionals. With SASE leading the way, we’ve embarked on the journey toward a more secure, agile network edge—but it’s just the beginning!
To learn more, take a look at GigaOm’s SASE Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
If you’re not yet a GigaOm subscriber, you can access the research using a free trial.
The post SASE: Today’s Landscape and Tomorrow’s Horizons appeared first on Gigaom.
Digital Transformation Not Working? Here’s a Five-Point Plan That Can Help
The cloud is the most obvious example. The idea that you could replace all your systems and infrastructure with brand new stuff that’s easier to use, deploy, and scale—at a lower cost—has been hugely compelling. Unfortunately, while the principles may be true, there’s the Jevons paradox: simply put, the more you have of something, the more you use it, so instead of costs lowering over time, as you might expect with increased efficiency, they increase with greater use.
As a result, boring finance types have been turning the screws on cloud spend. Moreover, at the end of 2022, interest rates started going up and money started to cost money, moving cost optimization from a “nice-to-have” to a need. Accountants are winning and innovators can’t just do whatever they like, hiding behind the banner of “digital transformation.”
If the momentum behind cloud-for-cloud’s-sake marketing is waning, it’s no surprise we’re turning our attention to the next bit of technological magic—artificial intelligence (AI)—or more accurately, large language models (LLMs). And this new manifestation has attributes similar to cloud: apparent low cost of entry, transformational potential for business, and so on.
But even as it begins its transformations, it is already signaling its doom, not least because LLMs are about processing rather than infrastructure. They create new workloads with new costs, rather than shifting a workload from one place to another. It’s a new line on the budget, a separate initiative with no real efficiency argument to sell it.
“But it has the power to transform!” cry the heralds. Sounds good, but the last wave of digital transformation has delivered, to put it kindly, sub-optimally. We’re three to four years into many initiatives, and the rubber has been hitting the road, in some cases, with much wailing and gnashing of teeth.
According to a recent PwC survey, “82% of CIOs say achieving value from adopting new technologies is a challenge to transforming.” Read between the lines and that says, “We’re not sure we’re getting what we thought out of our initiatives.” Meanwhile, consider the £100 million spent by the UK city of Birmingham on Oracle-based solutions, a significant factor in the city declaring bankruptcy.
For every public example of a large technology project failure, tens more go undeclared. A technology vendor told me that it was common to hear of a business investing millions into a platform, only to quietly write it off a couple of years later. As another example, an end-user organization told me it adopted a scorched-earth policy to move its infrastructure to the cloud, before rolling it back in pieces when it found it couldn’t lift and shift the entire application architecture with its manifold vagaries.
I get why people buy into the dream of massive change with epic results. I mean, I love the idea. Earlier in my career, I learned how end-user decision-makers were driven by how something would look on their CV, and that vendor sales representatives were highly focused on hitting their quarterly targets.
So, a lot of people have been duped into believing they could make these massive, sweeping changes in IT, with life-altering results. Obviously, it can work sometimes, but it isn’t a coincidence that most happy case studies come from smaller organizations because they’re of a size that can actually succeed.
Technology done right can achieve great things, but success can’t be guaranteed by technology alone. Sounds glib, right? But to the point: the problem is not the tech, it’s the complexity of the problem, plus the denialism that goes with the feeling it will somehow be different this time (cf: the definition of insanity).
Complexity applies to infrastructure—whether in-house and built to last yet frequently superseded, or cloud-based and started as a skunkworks project yet becoming a pillar of the architecture. As a consequence, we now have massive, interdependent pools of data, inadequate interfaces, imperfect functionality, and that age-old issue of only two people who really understand the system.
Unsurprisingly, simplification seems to be a massive theme among many technology providers right now—but, meanwhile, business has plenty of complexity of its own: bureaucracy, compliance issues, cross-organizational structures, conflicting policies, and politics at every level. Have you ever tried to make any substantive change to anything in your business? How did that go for you?
I am reminded of a book I once read about quality management systems—a.k.a. process improvement—by Isabelle Orgogozo. One line, paraphrased here, has stuck in my head ever since: “You can’t change the rules of a game while playing the game.” Why? Because of the fearful and competitive nature of humanity. If you don’t address this, you will fail.
Let’s be clear—technology creates complexity, and it doesn’t even come close to solving corporate complexity. That’s the bad news. Much as we may want some corporate utopian techno-future to be enabled at the flick of a switch, and as much as we have literally banked on it (and may be doing so again, with LLMs), this is never going to happen. You may want the problem to go away with a tool, but it won’t. Sorry!
So, what to do about it? Can you transform the untransformable, slay the dragons of complexity, and overcome organizational inertia? The answer is, I know it can be done, if certain pieces are in place. Conversely, if those pieces aren’t present, you stand less chance of success. It’s a bit like the sport of curling—achieving goals is as much about removing the things that will cause failure as much as it’s attempting that perfect shot at the goal.
I know, I know: yawn. We can all fall into a rhetorical black hole if we start down the track of, well, it’s just about the business strategy, isn’t it? That’s game over. It’s always about business strategy.
But that’s the point. In their rush to digital, companies have been losing touch with the tangible. Fine that the business strategy has been digital-first, but not so fine that it has been business-second.
No car company is going to wake up tomorrow and not be a car company. We’ve all heard manufacturers say, “We just sell boxes on wheels,” but that’s a big mistake because people are buying the boxes on wheels and they don’t care about software. You might innovate on the software, but ultimately, people are buying the box.
Technology may augment, automate, and even replace in terms of what we do, but it needs to be an equal partner in why we do it.
That “all businesses are software businesses” thing only works if—and here’s the rub—we don’t treat tech as a solve-all. There is never, ever, any excuse for assuming that the answers lie in the technical, and therefore one doesn’t have to think about business goals too much. We all do it, buying stuff to make our lives better, without thinking about what it is we need first.
An easy win is to address this all-too-human trait first. So, what are your strategic initiatives? What’s getting in the way of them? Start there. Absolutely feed in what tech has the potential to do, it would be insane not to. But put business goals first.
The tool should change the business for the better, or it’s no good. So, what’s the business change you’re looking for? And how is the tool going to help you get there?
GigaOm’s Darrel Kent discusses three types of business improvements in his blog: product innovation, customer growth, and operational efficiency. Obviously, operational efficiency is a big target right now, but so is product and service innovation.
An old consulting colleague and mentor of mine, Steve, used to be brought in when major change programs had gone off the rails. He was a rough, bearded bloke from the north of England, and he would start by asking, “What’s the problem we’re trying to solve here?”
It is never the wrong time to confirm business objectives and to ask how existing initiatives align with and drive them. If the answer is complex and starts to go into the weeds, you already have a problem. Cue another human trait: our inability to change course once it is set—which is why we bring in consultants at vast expense when things have gone wrong. You don’t need to wait for that moment, however; spotting failure in advance is not failure, it’s success.
Current projects might be about cost-efficiency, rationalization, and modernization, which is laudable, but could equally indicate an opportunity lost if all you are doing is looking for savings. So, look for gaps as well, as parts of your business strategy may be underserved. Remember the axiom (which I read somewhere, once) that success comes from cutting back quickest when a downturn happens, and coming out of it fastest when things start looking up.
Let’s keep this simple: if you don’t have a target architecture, you need one. I’m not talking about the convoluted mess your IT systems are in right now, but the shape of how you want them to be. The more you can push into this technology layer—let’s call it a platform—the better.
This does fit with the adage (which I just made up), “better to be the best in your sector than the best infrastructure engineer.” Yes, you will have to bank on a technology provider or several, so put your time and effort into building those relationships and defining your needs rather than burning cycles trying to keep systems running.
As a five-year plan, look to pin down the platform as a basis for customization to meet your business goals, rather than trying to get your custom solutions into a coherent platform that you can then, ahem, customize. I could spend time now talking about multicloud plus on-premises, but I won’t, not here.
How do you know if the platform is going to deliver? Simple: you can work through my SWB (scenario-workload-blueprint) model. OK, there is no such model, I just made it up. But let’s go through it piece by piece, and you’ll see what I’m getting at.
Scenarios
First, scenarios. Think of these as high-level business stories (in DevOps language), or simply, “What do we want to do with the tech?”
Scenarios may be user-facing: e-commerce, apps, and so on. Or they can be internal, linked to product development, sales and operations, or others. The point is not so much what scenarios look like, but whether you have a list of things you can check against the platform and ask, “Will it support that?”
Scenarios can also be tech-operational; for example, involving application rationalization, infrastructure consolidation, replatforming, and so on—but the question remains the same.
Workloads
In any case, scenarios beget workloads, which are the software-based building blocks needed to deliver on them. Data warehousing, virtualized infrastructure, container-based applications, analytics, and (that old chestnut) AI all fall under the banner of workloads.
By thinking about (business) scenarios and mapping to (technical) workloads, you’re reviewing how your nascent technical architecture maps to the needs of your organization. Out of this should emerge some common patterns, hopefully not too many, which we can call blueprints. These can form the basis of the platform’s success.
Blueprints
You certainly don’t want to build everything as a custom architecture, as that brings additional costs and inefficiencies. All we’re doing here is adding a couple of steps to set scope and confirm what can run where. The result—blueprints—can then be specced out in more detail, piloted, costed with confirmed operational overheads, and reviewed for security, sovereignty, compliance, and so on.
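Since the exercise is a mapping rather than engineering, it can even be expressed as a toy data structure. The scenarios, workloads, and blueprints below are invented for illustration; the useful part is the question at the end: which needed workloads does no blueprint yet cover?

```python
# Illustrative scenario-to-workload and blueprint-to-workload mappings.
SCENARIO_WORKLOADS = {
    "e-commerce storefront": {"container apps", "analytics"},
    "sales reporting": {"data warehousing", "analytics"},
    "customer support copilot": {"AI", "container apps"},
}

BLUEPRINTS = {  # common patterns the target platform commits to support
    "web tier": {"container apps"},
    "data platform": {"data warehousing", "analytics"},
}

def uncovered_workloads(scenario: str) -> set[str]:
    """Workloads the scenario needs that no blueprint currently covers."""
    covered = set().union(*BLUEPRINTS.values())
    return SCENARIO_WORKLOADS[scenario] - covered

for name in SCENARIO_WORKLOADS:
    gaps = uncovered_workloads(name)
    print(name, "->", "covered" if not gaps else f"gaps: {sorted(gaps)}")
```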
Also, and interestingly, very little of this exercise needs deep technical expertise. We’re creating a mapping, not building a new kind of transistor. So, there’s no excuse for keeping this discussion outside the boardroom—if your board is serious about digital transformation, that is.
There’s a moment when you need to bite the bullet and recognize that you can’t deliver perfection. Of course, you can still take the moonshot, but there’s a strong chance you’ll fly like Buzz Lightyear right before he crashes into the stairwell. You may smother this with a fire blanket of denial, to which I say: even if you’re still set on the summit of Everest, how about you do everything you can to get to base camp first? Several strategies can help you here, though you’ll need to work out your own combination (cf: curling).
Ultimately, the keys to digital transformation are in your hands and in the hands of the people around you. And there’s the crux: while the goal may be digital, the reality and the route are both going to be about people.
We may all want technology to “just work” but that’s like wanting people to “just change” and “just know how to make things different,” which just isn’t going to happen. Recognize this, address it head on, and the keys to digital transformation will be laid at your door.
I’d love to know how well this resonates with your own experiences, so do get in touch.
The post Digital Transformation Not Working? Here’s a Five-Point Plan That Can Help appeared first on Gigaom.
Developing a Culture for Growth—Reflections from the Project Team
A great company culture is something everyone in an organization will recognize, yet it’s hard to describe in a nutshell. So much of company culture is hidden beneath the surface and isn’t tangible—it’s behaviors you encounter and values you feel, like trust and belonging.
Positive, cohesive cultures help glue an organization together and provide a springboard for its people and their talents to thrive; they foster creativity and productivity and keep talented individuals fired up with a sense of pride in the organization and their contribution to it.
On the other hand, some cultures can be toxic, demoralizing places, sapping the lifeblood from their people and hemorrhaging their best talent—not a great plan when business is tight and competition for talent is fierce. Some consider the current landscape a war for talent.
I started working with GigaOm as a contractor just over two years ago. The company is a fully remote tech analyst firm, operating with a global mix of highly skilled employees and contractor practitioners. Growing fast and establishing a strong reputation in its sector, it had great products and was ready to adopt a little more organizational formality.
But first, GigaOm needed to build the cultural foundations on which to support its ambitious plans. Our team set out on a course to define the underpinning values that everyone in the organization could stand behind and then create an ongoing program to embed and maintain them. Importantly, GigaOm leadership wanted to ensure that its values were not merely named and then placed on a metaphorical shelf; instead, its values would be the beacons guiding the growing business in all aspects of its work.
It’s been a rare privilege to work with the GigaOm team as it builds its company culture from the ground up. In the past, my work in this area has involved working with companies to course-correct and adapt already-embedded cultures—how exciting it’s been to encounter a fresh canvas, the energy of a startup, the cross-organizational enthusiasm, and a fully invested leadership team! A promising set of ingredients.
Our team chose to use the Culture Design Canvas framework to support our work (covered in more detail below under “How We Settled on Our Culture”). Out of those efforts came GigaOm’s six values shown in the wheel below (Figure 1).
Figure 1. GigaOm’s Values Wheel
Each value includes qualifying “I” and “we” statements, helping to give meaning and personal accountability. We’ve also created policies, work processes, and communication channels to align with these values, and we feature the “value of the month” within our weekly huddle program.
Additionally, as we’re a remote workforce, we’ve leveraged tools like Slack and our evolving intranet called Gigahub to develop social, fun aspects of the culture. Some of our favorite culture-building channels are Gigafoodies, Crazy Ideas, Fantasy Football, and GigaPets.
I don’t believe it’s true that great culture cannot be built in remote or hybrid workplaces, although I would agree it needs determined, thoughtful, and intentional effort. Sure, meetups in person always add value; however, strong and close remote culture is not impossible, it’s just different.
Creating values and establishing cultural norms is just the start—maintaining values and ensuring the company is living up to them is where the real effort comes in.
To that end, our team has just completed a second round of focus groups, gathering feedback on our progress thus far and planning next-step initiatives to strengthen areas that need work. From those convos, we know an area we want to tackle next is how to embrace and unify the contractor/employee mixed workforce.
We are immensely proud of the progress made, which, without a doubt, is fueled by the belief that the leadership team and whole organization are invested in a positive culture as a major ingredient for future success.
When I was first considering working with GigaOm, an analyst described the company to me as an “innovative startup with a great product.” Of course, I was intrigued. A startup has many things going for it; it’s fast-paced and there are many opportunities for growth. It’s built around a small core group of dedicated individuals who are willing to wear many hats to produce something meaningful for its customers. But the startup is ideally a transient state. To maintain success, startups need to respond to growth, develop and streamline processes, and find the right balance of the right people in the right roles.
Once I came on board, it was clear that GigaOm was graduating from its startup phase into something bigger and more refined. Happily, we found that the process of “growing up” beyond startup status didn’t mean discarding the passion and enthusiasm that comes with starting something new.
The growth and success of GigaOm’s products and services meant that we needed to focus on organizational transformation to bolster this success with thoughtful internal change. Several areas stood out to leadership as being places we could improve, such as defining the core values of our organization, diversifying the individuals holding leadership positions, strengthening our project management office (PMO), and solidifying our people processes.
Defining organizational culture must be purposeful. While discussing where to begin with the important task of developing our values, we agreed that these values must come from across the organization. The people who would enact the culture needed to be included in the process of defining our values. With that in mind, we scheduled a series of collaborative brainstorming sessions with volunteers across the organization to hear where we were doing well, where we could improve, and what our colleagues valued in each other.
What stood out to me the most when we held these sessions was the enthusiasm that each person brought with them. They had great ideas for the culture they wanted to see, and they pointed out subtle areas of previously unspoken understanding. For example, many employees had already developed strong connections within a fully remote work environment, which was no small feat. Individuals were happy and proud to help unearth the ways they connected with their coworkers and upheld an overall sense of pride in their work.
With the input from these sessions, we were able to summarize common themes and settle upon six values that we knew we could represent and embody on a daily basis in everything we do. To keep our values front of mind, we have focused on one value each month so that we can lean into them, contemplate their impact, and find new ways to represent them.
Additionally, we knew it was important to be thoughtful in our hiring process and bring on people who could help take GigaOm to the next level. Beyond someone having the necessary experience, we needed individuals who were excited about an evolving role in a growing organization, people who would go beyond the scope of their job description to take on challenges that needed new solutions. Through our interviews, we selected candidates who matched with the values, energy, and direction of GigaOm. We were greatly rewarded! These additions to our teams have fostered spectacular results in efficiency, communication, and enthusiasm.
When I reflect on the differences from when I first started at GigaOm two years ago to what the company looks like now, I see the progress that we’ve made as well as more positive change on the horizon. Truly, change is the only constant. Our improvement is reliant on our flexibility and continued sober self-assessment. We are proud of what we’ve achieved and know that the work is far from over.
What would you implement if you were empowered with setting and guiding your organization’s culture, values, and norms? I invite you to think about and determine how you can impact your organization in these ways to help you and your colleagues thrive and evolve in positive ways.
It is my distinct honor to be part of the team at GigaOm that has put structure around these intangibles of culture, value, and norms, and is dedicated to cultivating them and keeping the organization accountable for living up to them. I am proud of the values we created and for putting them into a wheel format to demonstrate how each value is of equal importance.
The value I want to focus on today is “Seek, Welcome, and Respect Diversity.” We highlighted and celebrated this value in June to align with Juneteenth and discussions around diversity in our GigaOm community. We set the stage for a respectful discussion of differences and allowed people to feel comfortable asking questions of other people within the discussion. We celebrated the neurodiversity in our community and the ability to recognize the quieter contributions of our introverts. We asked culture questions about language, traditions, and lifestyles, and we invited external guests to participate in a Diversity in Tech Panel to garner additional perspectives from the tech industry we contribute to.
Part of my role as a culture guardian is to help create these psychologically safe environments for people to feel comfortable asking and sharing. Each time I participate in a diversity, equity, and inclusion (DEI) training or discussion, I learn from other people’s backgrounds and experiences. These opportunities highlight the importance of respect for diversity in a professional setting so we can create the best possible work environment and best version of our organization in the communities and markets we serve.
Each of our values intersects with the others, which is one reason why our values are meaningful to us as individuals and as an organization.
We can take our “Seek, Welcome, and Respect Diversity” value and link it to the other five:
While I’m excited to celebrate the progress we’ve made in defining our values, encouraging positive norms, and preserving our culture, I’m even more excited to see how we take our learnings from today and apply them to help us achieve a better tomorrow. Preserving and refining culture is an ongoing responsibility, and I am thrilled to be part of an organization and a team committed to this journey.
We’re always looking for more people to join our great team, so if you’d like to work for GigaOm, take a look at the current job listings on our careers page.
Gill Reindl
An organizational development consultant with 35 years’ experience gained across a variety of commercial sectors, including senior leadership roles in UK higher education, Gill is an experienced researcher and project manager in the areas of organizational culture, leadership development, and the future of education and work. She has worked on several projects with GigaOm.
Nic Saunders
A tech industry enthusiast with a background in operations and working in the startup space, Nic has worked with GigaOm for two years in the areas of people operations and finance.
Elizabeth Kittner
A finance and accounting guru with a technology focus who has a passion for elevating individuals and building healthy cultures in the organizations she serves. Elizabeth is a member of GigaOm’s executive team and oversees finance and people operations. She is also an author and speaker in the areas of ethics, communication, and leadership.
The post Developing a Culture for Growth—Reflections from the Project Team appeared first on Gigaom.
Machines as Amplifiers: Constructing Value Statements
For technology to deliver, it must enable people to achieve their desired goals. This means determining how to define value in a way that assures business fit at both a strategic level and in operational and organizational processes.
Let’s get to it.
Should we develop and implement the solutions we conceive of in our heads? That is as much a value question as an economic one. How can we weigh, compare, and contrast cost against value to arrive at a decision? Should we consider cultural and societal impact? Sometimes, value is measured in terms of what happens if you don’t do something, rather than if you do.
At the core, a business is trying to do some fundamental things. If it’s publicly traded, executives are trying to drive shareholder value—that is, make money or save money, or save time (to make or save money). From this standpoint, being profitable is an ongoing concern.
My preferred definition of a business is exactly that—an ongoing concern. To succeed in business, you must successfully define and execute a strategy. Using business school principles, we can define a value-based strategy based on three pillars: operational efficiency, customer intimacy, and product/performance superiority.
Digital transformation is nothing more than those three things: business strategy leveraging digital technology and delivery as a primary lever. Tying a solution to one (or more) of these three pillars provides the relevance required to convince decision-makers of fit for purpose within their business strategy.
For providers, trade-offs and leapfrogs can be translated into sales value. For instance, I can create a technology or system that will provide 100% data availability, or unfettered performance, or unlimited capacity. But what must I trade off to provide that, and what leapfrog technology must I use—or convince you of its value—to get you to accept it?
At a customer executive conference over 20 years ago, we asked the audience to consider what they could accomplish if we, as an industry, could provide them seemingly infinite data storage capacity, network bandwidth, and compute resources. Today, we are on the brink of achieving that aspiration. In many ways, we are already providing it.
Even so, why would a business buy? Sellers must relate, or translate, how a technology solution enables enterprises to accomplish their strategic business goals. It must fit the strategy—or strategies—they are trying to implement.
If technology providers want to align with value-based strategy, they need to ask three questions:
For vendors, that is how you create marketing value statements and how you tie your technology to business at the strategic level.
Even with a technology solution that fits their business strategy, you have to convince decision-makers and budget holders to buy and implement the solution. To do this, you must address their specific, persona-based decision criteria that are inevitably based on people and operational models.
There are, broadly, three buying personas that need to be satisfied: the executive buyer, the architect buyer, and the engineer buyer. Each has unique buying decision criteria you must address by translating your solution to meet those terms. Their perspectives are going to be influenced by how they view the impact to the operating model. In my experience, here is how all that shakes out.
Vendors must address each persona’s unique buying decision criteria to convince them to allocate the scarce resource known as budget. You are fighting for that resource and must convince both decision-maker and budget-holder of the operational value of your solution.
Your marketing and sales organizations must be able, and enabled, to translate and bridge the technical aspects of your solution to an organization’s business aspects and buying criteria. And as a seller, you won’t be the only one attempting to do this. You will have competition.
Ultimately, as a provider, the goal is to map your own value-based business strategy onto your buyer’s business strategy and values, through value statements that reflect their needs and provide resources and capabilities to enable them to achieve their desired business outcomes.
You need to know what target you’re aiming at and how you are responding, at a business level, to hit that target. Technology marketing teams are always striving to develop a value statement, but they’re typically thinking in terms of speeds and feeds or technical capability and differentiators. This approach will have limited appeal and “legs” beyond the specific buying persona or decision-maker interested in these aspects.
As a buyer, challenge your providers and sellers to meet your organization’s needs, strategies, and values on your terms. Analyze, review, and judge them on that basis. Why buy, otherwise?
At GigaOm, we work with clients to develop, enable, and activate the value propositions and statements unique to them based on the research we produce, matching users and providers for best technical and operational fit to needs and requirements.
How do you feel about your organization’s resources and capabilities to do this? Would you like some help?
If so, contact us to get started.
The post Machines as Amplifiers: Constructing Value Statements appeared first on Gigaom.
All About the GigaOm Radar | Explainer Video
Simply put, the GigaOm Radar report is written by engineering leaders to help other engineering leaders make engineering-led decisions.
So many reports we see today are based on market factors such as vendor market share, vendor incumbency, customer share, and so on. But CTOs, VPs of engineering and operations, and data architects all need to know how a product will fit their needs at a technical level. That’s what the GigaOm Radar report sets out to do.
As the video shows, we do rank solutions, just not according to market factors. Rather, we focus on delivery capability, strength of offering across differentiating features and business criteria, and speed of movement according to vendor proactiveness and roadmap.
In addition, there’s no magical place where the best products exist to the detriment of the rest. GigaOm recognizes that some buyers might want a broad platform that covers a wide set of needs reasonably well, and other buyers might want a specific solution to meet a certain need.
For product and service vendors, the GigaOm Radar acts like the solution architect or pre-sales engineer in the (virtual) room, who can discuss end-user needs in more detail and address any technical practicalities. Meanwhile, technology leaders can use the GigaOm Radar as a decision-making framework, which they can apply to their own scenarios.
Fundamentally, the GigaOm Radar is a learning tool to drive purchase decisions. All products have strengths and weaknesses, and the GigaOm Radar makes for more straightforward, facts-based decision-making, optimizing efficiency and reducing deployment risk.
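To make criteria-based shortlisting concrete, here is a toy weighted-scoring sketch. It is emphatically not GigaOm’s scoring model; the criteria, weights, and ratings are invented, and the point is only that weighting the criteria that matter to your scenario yields a defensible shortlist.

```python
# Invented criteria weights (they should sum to 1) and 1-5 vendor ratings.
WEIGHTS = {"scalability": 0.4, "ease of deployment": 0.35, "cost": 0.25}
VENDOR_SCORES = {
    "Vendor A": {"scalability": 5, "ease of deployment": 3, "cost": 2},
    "Vendor B": {"scalability": 3, "ease of deployment": 5, "cost": 4},
    "Vendor C": {"scalability": 4, "ease of deployment": 4, "cost": 3},
}

def shortlist(top_n: int = 2) -> list[tuple[str, float]]:
    """Rank vendors by weighted score and keep the top N."""
    ranked = sorted(
        ((name, round(sum(WEIGHTS[c] * r for c, r in ratings.items()), 2))
         for name, ratings in VENDOR_SCORES.items()),
        key=lambda pair: pair[1], reverse=True)
    return ranked[:top_n]

print(shortlist())  # -> [('Vendor B', 3.95), ('Vendor C', 3.75)]
```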
So, settle back, get a beverage of your choice, and dive in. For reference, the video comprises the following:
0:00-4:20—Introduction to GigaOm Radar concepts and purpose
4:20-6:45—Explaining the Leader, Challenger, and Entrant rings
6:45-end—How GigaOm scores and plots solutions, and how prospective customers can build a shortlist
Enjoy! And of course, if you have any questions or feedback, don’t hesitate to get in touch.
The post All About the GigaOm Radar | Explainer Video appeared first on Gigaom.
The post Transformational Training as Lived Experience: 5 Questions for Heather MacDonald, Pluralsight appeared first on Gigaom.
HM: I was in charge of strategy, change management, internal communication, employee engagement, women in tech, workforce of the future, and data analysis for the executive team. I wondered, could I take everything I’ve learned and see how it applied across larger enterprises? I came over to Pluralsight to do this.
My career path has been everything under the sun: construction and retail, restaurants and nonprofits, big and small companies. This allowed me to identify many common patterns across multiple sectors.
Also, I wanted to make workplaces more equitable so everyone would have the opportunities I did. I started at the bottom. My first job was as a construction admin for my dad, who didn’t have the budget to hire a full-time professional. From there, I never stopped learning, never stopped taking on responsibilities, and always showed up so I wouldn’t let down the people who opened doors for me.
On paper, I am not the typical candidate for the job that I’m doing. I don’t have the education, certifications, or time in a Big Four consulting company. What I do have is decades of lived experience, and I think the same can be true for so many other people. They also need that first door to be opened for them, then the understanding of how to open doors for themselves. That’s what I set out to do across the strategies and programs I create.
HM: I mostly work with individual customers to co-create the right solution based on their program maturity and pain points. We look at things like, “What is the strategy for this specific client? What are they hoping to achieve, learning and development-wise, and how does that connect to the business strategy?” Then, we distill that into actionable steps to support the learning and development of their people.
Part of it is sharing broader thought leadership about what these strategies look like and what they are in practice. This could be writing blog posts, presenting on webinars, or hosting workshops both locally and globally.
Then some of the work is internal facing. Because my role touches everything in Pluralsight—I’m collaborating with sales, customer success, product, and other teams—we work together to figure out the best solution for our customers and how to help them achieve it.
HM: What I’ve realized in this job is that no matter which industry they sit in or how big they are, the issues people face are super common and sometimes self-evident. If you’ve worked broadly in business, you can see the bigger landscape.
The problem is that everyone wants a silver bullet. They want Pluralsight to fix 100% of their problems overnight. That isn’t realistic, so we need to work through it together. You need to spend time learning about the organization to help the organization learn and improve. I never want to be seen as a consultant who thinks I know better and only gives orders. I want to walk alongside someone on their transformation journey to ensure they can be successful.
It’s like learning to drive. You don’t hand the keys to a Lamborghini to a 16-year-old and say, “Good luck, have fun, and I’ll see you in an hour.” They need to learn the book stuff, then go on the range and drive in a controlled environment. But people want to give you their Lamborghini and say, “Go ahead, figure out my entire company.” Even if you’re a great driver, you don’t just start driving and understand there’s a problem with the alternator, or that you need new tires.
In general, though, we do see patterns repeating. For example, we’ve all worked in places where change strategy starts at the top, and the executives and often senior leaders fully get it; they are bought in. But you hit layers seven, eight, and nine, and those people have no idea why they are here and why they matter. “I’m just a cog in the wheel,” they think, so how can they be bought into company-level change?
From a strategy perspective, it’s about stepping back and saying that if you as a business aren’t working through change management and communications effectively, none of this matters. You’ll never get anywhere if you can’t communicate down, up, and across. You need to create the environment and safety for the changes your organization needs to make.
At the pace of technological change and evolution, we can’t expect any one person to know it all anymore. We have to step back and say it’s more about collaborative and real-time learning and making sure people can fill the needs they have today. That’s where mentorship and practitioner support come in.
JC: Often, new organizations haven’t done the masterclass of business growth; they’re learning on the spot. Meanwhile, bigger companies are not able to change. They’re siloed. It’s less about telling them how to do the stuff they’ve been doing for 20 years and more about helping them understand how to align with the new. We all need that collaborative, transformational stuff. You don’t learn the theory and then suddenly change.
HM: It’s not the fault of executives who have been in business for decades. What worked back then was to go to school, get a degree, get a job, work your way up, and you could afford to buy the house with the white picket fence, drive the nice car, and feed your family on one income. Legacy industry execs, like those in banking, utilities, and telecom, sometimes feel like what worked for them should work for everyone and don’t understand why folks are pushing for more remote work and different benefit options.
With all that has happened in our world, we’re in a time where that plan for career success doesn’t work anymore. You can get a good job and still not be able to buy a house, buy a car, or afford a family. You can go to a top-tier school and still not get a job because you don’t have the experience. We have to honor where you’ve been and acknowledge that if we want to remain competitive and grow, we must make incremental changes.
We can’t expect every executive leader to understand how to navigate a fully hybrid and remote environment. That is challenging for people who are used to doing it one way because that worked for them before. So, how do we support the top layer of executives and leaders to learn the skills and capabilities they need to continue leading companies? We need to set egos and titles aside and realize we’re all learning through this. We need to step back and collectively figure out how the world of work is going to look going forward, and honestly, it’s likely to keep changing over time. Anyone looking for a static way of leading is going to get left behind.
JC: I have to say I am slightly disappointed it’s not old guys smoking cigars and sitting in big leather chairs dictating letters anymore! I was looking forward to that.
HM: Ha! I still encounter people who say, “Could you fax me that agenda?” No, you can open the attachment. It’s one page with three bullet points. “Oh…can you print it?” No, we’re saving trees today. This agenda doesn’t need to be put in a filing cabinet.
HM: I tell people considering our service that we’re not consultants who tell you everything you’re doing is wrong and then disappear. We help you start to make progress toward your transformational change goals. By nature, transformation does not happen overnight. It takes time, effort, and evolution.
We go back to basics and the foundation: OK, you’re trying to upskill a workforce. The disparity between your organization’s least and most technical people is probably massive. So, how do you get everyone on the same page?
It’s not the same for every organization, but you can figure out what will fit most people. Some, who are reasonably skilled and have a decent amount of time, can self-select into a program. Then, figure out the outliers, the people who are super far behind or ahead. What do they need to be doing? It will require different solutions for them.
In cybersecurity training, for example, maybe your warehouse teams need the most attention today because someone clicked on a phishing email and caused a data breach. You need to think about what cybersecurity training looks like for people in a warehouse. What works as cybersecurity training for people in an office setting is not going to be the most applicable, or effective, way to train people in a warehouse or in roles that aren’t tied to a desk.
Even if you are reasonably technical in your role, that’s no protection. We have to make sure everyone understands that one bad email could take down your entire company. We don’t want people to be scared and paralyzed, but we want them to have a strong enough sense of awareness that they don’t click on the thing that could be a bad link.
Even cybersecurity professionals at the top of their game who have been doing this forever are having to adapt because everything keeps changing. Attacks that happened yesterday are not the attacks that will happen tomorrow. There’s constant anxiety of, “Am I going to be the person that misses the thing that takes down my company?” That group needs a different level of tech skills development support and engagement to ensure we’re not burning out the people who need to be well rested and prepared if things go wrong.
HM: Yes, indeed, it’s the 70-20-10 model for learning. 70% of learning needs to be hands-on and experiential, like labs, job rotations, and stretch assignments. 20% of it should be social learning, like mentoring, communities of practice, coaching, or buddy systems. The last 10% is formal learning: videos, books, college courses, and certifications. Formal learning is good for gaining knowledge but doesn’t translate into wisdom until you put it to work.
If you can’t contextualize what you’ve learned, you’re book smart. With that 70%, you can fill the gap between “I learned a thing” versus “I know what this means within the context of my role, my business, the economy, and the world around me.” It’s the difference between learning something and bringing it into your own lived experience.
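For reference, here is a minimal sketch of the 70-20-10 split applied to a time budget; the 40-hour figure and the category labels are invented for the example.

# Apply the 70-20-10 learning model to a hypothetical time budget.
def split_learning_hours(total_hours: float) -> dict[str, float]:
    """Return hours per learning mode under the 70-20-10 model."""
    return {
        "experiential (labs, rotations, stretch assignments)": total_hours * 0.70,
        "social (mentoring, communities, coaching)": total_hours * 0.20,
        "formal (videos, books, courses, certifications)": total_hours * 0.10,
    }

for mode, hours in split_learning_hours(40).items():
    print(f"{mode}: {hours:.0f} hours")  # 28, 8, and 4 hours respectively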
JC: Thank you so much, Heather!
HM: My pleasure.
The post Apple Vision Pro: Unlocking the Potential of Spatial Computing appeared first on Gigaom.
I rarely get the chance to work with technologies that fundamentally disrupt the way we work. Right now, I am using two. The first (and most obvious) is the large language model (LLM). LLMs are more than AI bots that can understand human language and respond accordingly. They have the potential to change how we use technology. Rather than a GUI with buttons, menus, and a help button that does anything but help, what if you just talked to your computer and told it what you wanted it to help you do? A keyboard alone is faster than a keyboard and mouse, and voice is faster than either. But that is a different article.
Right now, I want to focus on Spatial Computing. In the same way ChatGPT enables an entirely new way to interface with your computer, Spatial Computing allows you to integrate technology into your environment, removing the barrier between screens and real life. While that sounds terrible at first, think about how much time you spend focused on a screen. That goes double for most work, especially remote work. Many of us spend eight hours (or more) a day staring at a screen to the exclusion of our environment. You are likely sitting in a chair or standing at a desk at this very second while, outside, the world moves on without you. Or, in one of the most egregious offenses, you might be sitting around a table to break bread with people who are dividing their time between the people around them and their devices.
Think also about information security. How often have you accidentally exposed a screen with work data to a complete stranger? Most of us don’t even know. Someone could be standing behind you, reading this very blog right now. Spatial Computing solves this problem in an instant. The screens exist only for you, and they can be as large as you want, at 8K resolution. Imagine that: full access to your data, whenever you want, without any intrusion from unwanted eyes.
Fine, but do I need to look like I’m in a sci-fi movie to do it? Well, yes, and no.
Sure, today’s headsets make you look like a cyberpunk Bono. I’ve experienced the sideways glances while wearing the Apple Vision Pro on a recent flight, and I am sure at least one passenger has 50 pictures of me wearing them. That is the price for early adoption. Well, that and the incredibly high retail cost. But things can’t stay that way. Technology will evolve. Prices will come down, and they will do so thanks to those early adopters blazing a trail for the rest to follow. I can’t tell you how excited I am to see where this technology goes from here.
Okay, enough about my excitement for the future; let’s get into the nitty gritty of Spatial Computing now.
What is it?
Spatial Computing integrates technology into everyday life by placing its virtual aspects on top of your vision of the real world, unlike, say, a virtual reality (VR) headset that obscures the real world from view. Google Glass was very good at the “reality” part of the equation but poor at the technology piece (I still have my Google Glass in a box somewhere). The Microsoft HoloLens is an excellent augmented reality (AR) headset but is very expensive and purpose-driven. If you have a HoloLens, you are either a developer or have a dedicated app paired with it to meet a specific use case. To be of ubiquitous value, Spatial Computing requires general-use computing as its key use case.
I talked about the privacy benefits of Spatial Computing, but we need to go further. As a general-use platform, it supports all the standard computing use cases, and far more besides. Think about how your entertainment will change when it immerses you in the content. Okay, that sounds like fun, but it’s not applicable to most businesses. What about taking a virtual tour of a house (Zillow being the favorite time waster of daydreamers and dads everywhere)? Top medical schools may one day train students by placing a virtual body on the table in front of them. Overlays can walk you step-by-step through parts replacement using object recognition. Suddenly, you can look at manufacturing lines, identify problems, and receive specific advice to remediate them. Think of simple quality-of-life improvements, like a GPS that paints the directions on the road ahead of you, or an e-commerce application that uses object recognition to let customers focus on a product and immediately see the purchase page in their view. Even my tattoo artist is excited about overlaying original artwork onto the client’s skin to improve the accuracy of the transfer. The possibilities get endless fast.
I do not recommend anyone implement the technology today as part of a new project unless you have a clear ROI and intense urgency, because it is still early on the hype curve. There are 20 million VR headsets in people’s hands right now; while that sounds like a lot, it will not give you much of an applicable audience this second unless you are in entertainment (gaming or video). Nor are you likely to move the purchasers whose headsets are collecting dust for lack of a killer use case.
Instead, please take advantage of the time you have, a rare gift in today’s market. Start thinking about how you can best position yourself to leverage the technology once it hits the adoption inflection point. How can you improve the lives of your employees? How about your customers? I would aim to be neither too early nor too late. The delta between the two is likely to be several years.
Still, while I’m in a position to make recommendations, I would also urge you to support those in your organization who wish to be pioneers. That does not mean a corporate purchase of the equipment (although that is not a terrible idea). Instead, adopt a BYOD policy for the devices. These people will be your best source of information about the state of the market, what innovations are coming your way, and how the user base has expanded. By supporting the hardware, you can easily see who is using the devices and know who to involve in the feedback loop as your strategy develops.
Navigating Potential Risks
The technology is nowhere near standardized. We are still in the pioneering phase, and the risk of investing time and money into the platform only to find it discontinued due to a market failure is high. At the same time, if the technology does take off, not having a strategy to address your customers’ needs (or wants) opens you up to potential disruption.
While I believe LLMs will affect all of us, I do not see the same for Spatial Computing. I imagine that somewhere between 20 and 40 percent of organizations have a use case for it. While I think the technology will be ubiquitous in the future, that does not mean every application must transform to meet it. The change may be as simple as paring back the interface and enabling transparency.
Leveraging the Benefits
In the near term, there is a lot of potential to capture an audience. There is no “killer app” for the headsets today, so there is a first-mover advantage for companies with such a use case. The devices are also premium hardware. As such, the owners tend to be those with higher-than-average levels of disposable income, meaning they are also more likely to pay a premium for those first-mover applications. This should help offset development costs, since the market value of the application will be significantly higher for these customers than for the general market.
Assessing Maturity and Readiness
The market is very immature. Act accordingly. Enable your organization to learn as much as it can about the space. If you do not have an obvious use case, this can be passive for now. It is beneficial to support the devices in a BYOD plan so you can leverage those early adopters for feedback when the timing is right to activate your strategy.
Strategic Implementation
I fully expect this technology to evolve quickly over the next 24-36 months, and I would keep an eye on the big tech companies to see what they announce. Apple currently has the best hardware and operating system. We should all watch WWDC this year to see what Apple announces and how much stage time the concept receives; this will be a good indicator of the investment Apple is placing in the space, and the rest of the market will respond accordingly.
Future Outlook
Creatives and futurists have seen this VR and Spatial Computing future coming for a long time. Ready Player One, the novel and film, showed us that future and the “metaverse” it may bring (a term coined by Neal Stephenson in his 1992 novel Snow Crash). This VR tomorrow will erase the division between the virtual and the physical as we move between the two. I am excited about the potential of the Apple Vision Pro combined with the Disney HoloTile floor. Add in some haptic gloves, and we are painfully close to the holodeck of my childhood dreams.
To summarize, devote thought, not action, to this technology. While some first-mover potential exists, most organizations do not need to expend resources here quite yet. Watch the market, specifically Apple, to see if the investments continue growing. Adopt a BYOD policy for Spatial Computing so you can nurture the fast movers in your organization. Finally, subscribe to this blog and GigaOm to stay up to date.