GigaOm Research FAQs

Frequently asked questions when participating in GigaOm Research

Planning

First we send a research proposal, which you and we can use to assess whether you are an appropriate fit for the report. We also send an intake form containing the criteria we will use to evaluate solutions, so you can respond to these specifically. We then ask for a briefing/demo between our report author and your team – note that this is to look at the product in more detail, not to understand the company. We then write up the findings and send them to you for fact checking prior to publication. Overall, the process takes 8-12 weeks.

Yes, please contact us to find out more about our methodology and research processes.

Typically, you will receive correspondence for the factual review about 3 to 4 weeks after the briefings have wrapped up.

We will work with our sales and analyst teams to identify other reports your company is featured in, and will endeavor to share approximate timelines for fact check.

Reports typically publish 2-4 weeks after the fact check deadline.

Process

Our research proposals set out the key criteria we will use to judge and rank solutions for the Radar; we share the proposal with you in your original invitation to join the research.

Customer references are only for our records, and we do not use customer names in the report. We ask for the completion of a questionnaire, then the analyst contacts customers directly via email and sets up a call. This is usually a 30-minute meeting in which we’ll cover our criteria and why they chose your company, how implementation went, and what their experience of your solution has been.

We are schedule- and category-driven when it comes to research. In an ideal world, a vendor would approach us and say, “We’re in the XDR space,” and we’d say, “Ah cool, we’re going to be researching that next quarter.” We then make a note in our database to include the vendor name in the research proposal, so the analyst can review it. The research proposal also contains table stakes – these determine whether a vendor is in or out of a Radar – so once you’re logged, you’re in the system. But clearly we’d rather have a report-specific briefing than a general briefing when a report isn’t being written.

In general, we will create multiple Radars in two situations, where the vendor placements would be significantly different from a “general” Radar. These are (1) per market segment (usually small, medium, and large enterprise), or (2) per use case – for example, for Unified Data Management, we created one Radar for business-focused solutions and another for infrastructure-focused solutions. The rationale for creating multiple Radars will be explained in the research proposal for the given report(s).

At that point, we’ll want to know about the vendor anyway, even if they’re small. Lead analysts follow market evolution and assess what makes sense, for example whether a new category is emerging. If we feel there’s sufficient merit, we might write a Sonar report covering multiple vendors, or a Solution Brief if the vendor is in a market of one. Both Radars and Sonars can be licensed, and we may choose to expedite either if a vendor is keen to license it (just as an enterprise end user might request a roadmap feature by paying in advance) – though obviously, this has no bearing on the vendor’s final position.

Largely the answer comes down to table stakes, i.e. the set of things a solution in the technology category does. Relevance is defined by functionality, not market presence. In this way we’re following the same steps as a technology buyer and creating a level playing field (as best we can) for evaluation. In some cases, particularly if a large number of vendors exists, we may create additional inclusion criteria, for example based on use cases and/or a specific subset of customers. We document these in the research proposal before we contact vendors.

As part of our fact check process, we provide each vendor a link to our actual report with the capsules for all the other vendors removed. Vendors are free to mark up anything they see as inaccurate, or add any information they think necessary to support their case. Specific points can be followed up by email, and we look to address all vendor questions and concerns.

Briefings and presentations

The main point of the demo is to see the look and feel of the solution, and to understand how it might be deployed. Some of our analysts look to run evaluation copies of the software so they can test the solutions for themselves. If we are looking to evaluate ease of use or graphical creation of workflows for example, demos help prove these points.

Our analysts are interested in how the solution reflects the key criteria set out in the research proposal – this focuses on the needs of users. They want to see functionality, usability, and understand how the technology is designed and deployed.

This is up to the vendor, but we suggest it focus on the real-world application of the tools and the technology that underpins them.

Yes, as long as it is agreed before the call. Please note that we record briefing calls so we can reference them later in our research.

Time pressures can make this a challenge, so it will be at the discretion of the analyst.

Scoring

Given that solution categories are constantly commoditizing, the general rule for scoring is the principle of a threshold. If a vendor solution delivers on a feature or quality, it hits the threshold and is scored ++ accordingly. If the vendor delivers better than other vendors in some way, it can get +++, and the analyst should document the reason why. Similarly, if the solution is lacking compared to other vendors, it is awarded +, and the analyst should also document why. In this way, we can take account of what might have been a differentiating solution one year but, a year later, has become something all vendors provide.

Technology is constantly commoditizing, and what was once a leading-edge feature will become table stakes over time. As such, we grade vendors on how they rank relative to each other at the time of writing. Should a vendor deliver on a feature or metric as well as the majority of other vendors, they will be awarded ++, that is, they are hitting the norm. If they do something significantly better, we will award +++ and state what the differentiator is. Similarly, we award only a single + if a feature or metric is less good than other vendor offerings, but we will say why.
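As an illustration only, the sketch below shows one way such a relative grading rule could be expressed in code. The function name, the use of the median as the norm, and the margin cutoff are assumptions made for this example; they are not how our analysts actually compute scores.

```python
# Illustrative sketch only: one way to express a relative "plus" grading rule.
# The median-as-norm and the margin cutoff are assumptions for this example,
# not the actual scoring mechanics used by analysts.
from statistics import median

def plus_grade(vendor_score: float, all_scores: list[float],
               margin: float = 0.15) -> str:
    """Grade one vendor on one criterion relative to the field.

    ++  -> in line with the norm (hitting the threshold)
    +++ -> significantly better than other vendors
    +   -> lacking compared to other vendors
    """
    norm = median(all_scores)
    if vendor_score > norm * (1 + margin):
        return "+++"
    if vendor_score < norm * (1 - margin):
        return "+"
    return "++"

# Example: five vendors scored on the same criterion.
scores = [2.0, 2.1, 1.9, 2.0, 3.0]
print(plus_grade(3.0, scores))  # +++ (ahead of the pack)
print(plus_grade(2.0, scores))  # ++  (hitting the norm)
print(plus_grade(1.5, scores))  # +   (lagging, with the reason documented)
```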

We recognize markets are constantly commoditizing. This means we are looking for deviation from what is seen as the current norm – in this case, we’ll start by assuming everyone is a Fast Mover, then we’ll look for those that are ahead of the pack (Outperformers) or behind (Forward Movers). A consequence of this model is that stating why a vendor is ahead or behind provides a useful cross-check of the vendor description. Internally we talk about defensibility, i.e. we always have to be able to say why a vendor solution is in the position it is in.
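Again purely as an illustration, the sketch below captures the “assume Fast Mover, then look for deviation” logic described above. The pace measure and the deviation band are assumptions for this example, not our actual methodology.

```python
# Illustrative sketch only: classify a vendor's pace relative to the pack.
# The pace measure and the deviation band are assumptions for this example.

def mover_class(pace: float, field_paces: list[float],
                band: float = 0.2) -> str:
    """Start from the assumption that every vendor is a Fast Mover;
    only a clear deviation from the field's average pace reclassifies a
    vendor as an Outperformer (ahead) or a Forward Mover (behind)."""
    norm = sum(field_paces) / len(field_paces)
    if pace > norm + band:
        return "Outperformer"
    if pace < norm - band:
        return "Forward Mover"
    return "Fast Mover"

# Example: most of the field moves at a similar pace; one vendor pulls ahead.
paces = [1.0, 1.1, 0.9, 1.0, 1.6]
print(mover_class(1.6, paces))  # Outperformer
print(mover_class(1.0, paces))  # Fast Mover
print(mover_class(0.6, paces))  # Forward Mover
```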

We rate vendor solutions based on technology rather than company size, sales numbers or incumbency. The Radar position is dictated by aggregating the scores and positioning of a vendor solution, including the plus system, speed and direction of travel, and whether the vendor is a platform or feature player. Please see other questions to understand how these scores are measured individually.

When a vendor rates their solution on a scale of 1 to 3 (3 being best), they are not rating their solution against other vendors/solutions, but rather how well they think they are executing on that specific key criterion, evaluation metric, and so on. The questionnaire is meant to give the analyst insight into the vendor solution’s strengths, challenges, and unique capabilities.

Governance

Our analysts come from a broad pool of practitioners who are representative of the industry as a whole. Our methodology and peer review process ensure that our Radar reports are as objective and independent as possible, and we offer vendors an opportunity to check what we have written prior to publication. Should an analyst come from a specific vendor, we can define a cooling-off period during which the analyst cannot write about that vendor.

Our Radar report process reflects the steps an organization would go through as it evaluates solutions to a given problem or opportunity. This means that excluding a vendor just because they don’t want to be included doesn’t make sense. If a vendor fits the table stakes and falls within the inclusion criteria, we will include them, as doing otherwise would not fit the ethos of the report. That being said, we welcome a vendor’s views on why it might be inappropriate to include them – for example, the vendor may have a set of capabilities similar to the table stakes but not address the market need/scenario set out in the report, or it may be about to drop the product/service concerned. While we can let the analyst know that you would prefer not to appear in a report, they ultimately make the call.

A vendor’s solution will be included in the fact check for a report if it has met the required table stakes for a research category. This reflects the same process that would take place at an end-user organization. A vendor may make a request for removal, for example if it does not feel it qualifies to be in the report. Such requests will be considered by the analyst on a case-by-case basis.

GigaOm does not denote whether vendors have or have not participated in our research.

Sales

There is no cost associated with being included in a report; costs apply only if you’re interested in leveraging reprint rights. For further information on our pricing model for licensing, please contact our sales team.

There’s no such thing as a “client-sponsored Sonar” at GigaOm. Radar and Sonar reports are part of our syndicated content stream, which means we write them independently; only then does Sales see them and reach out about reprint rights and licensing opportunities. We also have the option for a vendor to license in advance – all this means is that we’ll bring the report forward a quarter or two (vendors do similar with roadmap features). We’ve done this with a handful of Radars and Sonars; the creation process stays the same and remains fully independent of Sales. Of course, the sponsoring vendor may believe that they will do well in the tech rankings, but we tell them up front that this might not be the case!

There is no cost associated with being included in this report. If you are interested in leveraging reprint rights after the report is completed, we can put you in touch with our sales team.

If you are interested in leveraging reprint rights after the report is completed, you can contact our sales team for more information.

If your vendor is interested in being included in a Radar report, please do let us know. While we generally only evaluate specific categories once a year, we can let the analyst team know that the vendor is interested in being considered for our research, and when we next refresh the report our team will reach out if the vendor meets the inclusion criteria for the report at that time. If the report is currently underway, we will of course consider inclusion during the process, subject to delivery deadlines.

Unfortunately, we can’t share the full report with a vendor until they purchase the reprint rights. Even though this wouldn’t be shown externally, it still contains competitive information that a vendor can benefit from, and would be unfair to the other vendors represented in the report.

Product

The Sonar report is a newer, simpler product than the Radar report. It exists for emerging technologies where there isn’t yet enough information, or enough vendors, for a Radar. It’s a much shorter report, combining the main elements of the Key Criteria and Radar reports.


Have more questions? Contact us!