Input sound file:
10-17 pm Session 1_1006.MP3
Session Name: The Evolution of Mobile Development Tools
Speaker: Jay Srinivasan
Up next, before we hit the break, we're going to have a presentation, The Evolution of Mobile Development Tools, and it's going to be Jay Srinivasan – he's the Co-Founder and CEO of Appurify. Please welcome Jay to the floor.
Jay Srinivasan 00:18
Hi, I'm Jay Srinivasan, I'm the Co-Founder and CEO of Appurify. If you look at your conference program, I'm supposed to be here on stage with my Co-Founder and CTO, Manish Lachwani. Given that we only have 10 minutes, I want to make this as efficient as possible. Thank you to the conference organizers for this opportunity. What I wanted to talk about is how we see mobile tools evolving, or more specifically, what companies can do to launch better apps. I'm going to make a fairly strong statement here, but what we strongly believe is that mobile application development today is broken. Especially if you think about it in comparison with the Web World and the PC World: you have siloed development, siloed testing and sort of ad hoc releases. There's very little continuous integration. There's very little automated testing. This is partially due to the fact that there really aren't very good tools out there for testing, debugging and optimizing mobile apps. What this causes is, essentially, inefficient launches. So you launch your app, there are stability issues, there are network issues, the app is slow, it's laggy, and essentially what happens is you're almost relying on your users to find the problems for you after launch.
Jay Srinivasan 01:32
The analogy that I always think about is: how comfortable would Amazon or Google or someone be with their website crashing every 10 times you use it, having your customers find your issues for you, and then fixing them in the field? What we believe strongly is that mobile needs to think about its development process in the exact same way as the Web World. You don't want your customers finding issues, because what's going to happen is you're going to have poor ratings and you're going to lose revenue. This isn't just a theory, you can actually see this in the field every day. So if you look at this graph, what you see on the top is two very similar applications that launched around the same time – I won't indicate their names. You'll see that both of them had similar marketing budgets, so there are a bunch of users at the beginning – on day zero. Application two continued to do well. Application one didn't do so well. The problem is, if you don't think about stability, performance and quality, what's going to happen is you're going to get lower retention and you're going to get lower ratings.
Jay Srinivasan 02:29
It's going to translate to fewer users, fewer new installs, and you are going to lose revenue. The point here is, these applications are very similar – it was the same publisher, it was the same amount of marketing spend, the same amount of development spend. The difference was the quality process prior to launch. And just one quick point here: it's not just about your first launch – every time you push an update, that's a new launch. The good news is, there are companies out there that are trying to solve this. Given that I'm up on stage, I'm going to plug my company, Appurify. We're a 20-person engineering startup, around a year and a half old, backed by some fantastic investors. Our mission is to help solve this issue of mobile application quality before launch, but also after launch. I only have six minutes left and I want to make sure that I actually contribute something to this audience. If you take away nothing else from this, please take away these four points. We've talked to probably something like 1,500 developers over the last year and a half, and what we've realized is that there are four things that you can do to launch better apps and ensure better revenue.
Jay Srinivasan 03:31
Number one, test and optimize on real devices. It's not enough to do simulators and it's not enough to do emulators – you need to think about real devices. Number two, it's not just real devices, it's also the user conditions. Mobile's awesome in that you can use it anywhere, but at the same time, you're now at the mercy of the network. You're at the mercy of how much memory is available on your device. You're at the mercy of the fingerprint scanner on your device. So it's not just real devices, it's real user conditions. Number three, do not think about mobile testing as a manual process – you have to automate it somehow. Finally, it's not just enough for your app to work properly. You need to make sure that it's stable. You need to make sure that it responds under different conditions. You need to make sure it doesn't get hot and isn't a battery hog. So, diving into each of these in a little more detail: number one, you've got to test and optimize on real devices. That's all I'm saying – your users are on real devices, so you need to be testing and optimizing your app on real devices too.
Jay Srinivasan 04:23
What you see on the left is what Appurify does to address this issue. We have a data center with 1,000 iOS and Android devices. We literally provide a full-fledged farm of devices for you to actually be able to test your app on, so that your customers aren't finding your issues for you – instead, you're finding them on real devices. There's stuff like crashes, memory issues and network issues that you're only going to be able to capture on a real device. So number one, make sure you're on real devices. The second thing is, it's not just the devices, it's the user conditions too. So if you look at this graph – going back to these two applications, one that did really well and one that didn't – the main difference was how they performed on different networks. What this graph is showing is essentially user engagement in the first five minutes, for folks that had an iPhone on Verizon and folks that had an iPhone on AT&T. As you can see, between steps three and four, there's a really steep drop. When we went and debugged this, we realized that anybody on AT&T with three bars or below would see the app crash, because there was a network call that would time out.
Jay Srinivasan 05:21
So you spent all this money building and marketing this app, and then you didn't test it under poor network conditions – and literally the first time anyone on an AT&T iPhone got the app, it crashed. You need to really think about real user conditions. By the way, at Appurify, what we do is our devices allow you to change carrier, signal strength, memory, location – everything. Now, if you're testing on real devices and you're testing on every type of user condition, you're just going to get inundated with combinations. There are just too many devices and operating systems out there. Everybody talks about Android fragmentation, but think about it: even on iOS, there are more than 40 combinations of operating system and device type. Couple that with the user conditions, and you're not going to be able to manually test every combination – you're going to miss stuff and your customers are going to find it for you. The good news here is that there are many open-source frameworks that are now coming out for automation in mobile. A lot of them are still quite nascent, but what we're finding is that there's more adoption – they're getting more stable and they're getting more robust.
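The combinatorial point Jay makes here can be sketched in a few lines of Python. The device and condition lists below are invented for illustration – they are not Appurify's actual test matrix – but they show how quickly even four small dimensions multiply beyond what manual testing can cover:

```python
# Hypothetical test matrix: a few device types, OS versions, and network
# conditions. Lists are illustrative only.
from itertools import product

devices = ["iPhone 4S", "iPhone 5", "iPad 2", "iPad mini"]   # 4 device types
os_versions = ["iOS 5.1", "iOS 6.0", "iOS 6.1"]              # 3 OS versions
carriers = ["AT&T", "Verizon", "Sprint"]                     # 3 carriers
signal = ["1 bar", "3 bars", "5 bars"]                       # 3 signal strengths

# Every combination a thorough test pass would need to cover.
matrix = list(product(devices, os_versions, carriers, signal))
print(len(matrix))  # 4 * 3 * 3 * 3 = 108 combinations
```

One hundred and eight runs from just four small dimensions; add memory pressure, location, and Android's device landscape, and automation stops being optional.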
Jay Srinivasan 06:20
By the way, to plug Appurify here: if you do have an automation framework, we run those automation frameworks on real devices. So, we've talked about real devices, we've talked about real user conditions and we've talked about automation. Finally, the point that I want to end with is, it's not just functionality. People talk about functionality as the main thing that you're testing when you're playing around with your app, but you find that other issues are also really important. So if you look at that pie chart, what we did was we went to the App Store and read every single one-star review for the top 200 apps in the iOS App Store. We started bucketing them into different categories. The blue slice on the left, those are non-performance-related issues – that's someone saying, "I don't like this game. I don't use this bank. This app isn't useful for me." Those are non-performance issues. But what you find is that 52% of the time that someone gave a one-star review in the iOS App Store, it had to do with crashes or network performance or client-side performance.
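The bucketing exercise Jay describes could be approximated with a simple keyword classifier. This is a minimal sketch – the keyword list and sample reviews are made up for illustration, and the real study presumably used more careful categorization of the actual top-200 reviews:

```python
# Sketch: bucket one-star reviews into performance vs. non-performance issues
# by keyword match. Keywords and reviews are illustrative, not the real dataset.
PERFORMANCE_KEYWORDS = ["crash", "freeze", "slow", "lag", "network", "timeout", "battery"]

def bucket_review(text):
    """Classify a review as performance-related if any keyword appears."""
    lowered = text.lower()
    if any(word in lowered for word in PERFORMANCE_KEYWORDS):
        return "performance"
    return "non-performance"

reviews = [
    "App crashes every time I open it",
    "I don't like this game",
    "So slow on my phone, constant lag",
    "This bank isn't useful for me",
]
counts = {}
for r in reviews:
    label = bucket_review(r)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'performance': 2, 'non-performance': 2}
```

Even a crude classifier like this makes the point: a large share of one-star reviews name crashes, slowness, or network problems rather than disliking the app itself.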
Jay Srinivasan 07:27
So it's not just about testing functionality. You want to also make sure that users don't see crashes in the field – users shouldn't see bugs. When you're on a poor network, your app should load, or it should handle it gracefully. Battery consumption shouldn't be a major issue and your phone shouldn't get really hot. All of these need to be tested and optimized before launch. If you don't do that effectively, what's going to happen is users are going to leave poor reviews. If you go back to that first slide, what you see is: every time you have poor ratings and poor reviews, your users are going to stop using your app, new downloads aren't going to happen, and you're going to lose revenue. If anyone has any questions or wants more information, we are absolutely passionate about this space. We think mobile development is in its infancy. We think we're going to see some fantastic tools emerge over the next few years. Essentially, all the learnings that people had in the PC World and all the learnings that people had in the Web World are going to be applied to mobile. We'd love to answer any questions, so feel free to contact us. Thank you.
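The "handle a poor network gracefully" advice – the exact failure behind the AT&T crash Jay described – can be sketched as a timeout-tolerant wrapper. Everything here is hypothetical (`flaky_fetch` stands in for a real API call), but it shows the pattern: retry with backoff, then fall back instead of crashing:

```python
# Sketch: wrap a network call in retries with exponential backoff, and fall
# back gracefully on repeated timeouts instead of crashing the app.
# flaky_fetch is a hypothetical stand-in for a real network request.
import time

class NetworkTimeout(Exception):
    """Stand-in for a request-timeout error from a networking library."""

def fetch_with_retry(fetch, retries=3, backoff=0.01):
    """Try the call a few times; on repeated failure, return None so the
    caller can show cached data or an error screen instead of crashing."""
    for attempt in range(retries):
        try:
            return fetch()
        except NetworkTimeout:
            time.sleep(backoff * (2 ** attempt))  # back off between attempts
    return None

calls = {"n": 0}
def flaky_fetch():
    # Simulates a weak three-bar connection: times out twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise NetworkTimeout()
    return "feed data"

result = fetch_with_retry(flaky_fetch)
print(result)  # 'feed data' – recovered instead of an unhandled crash
```

The same call that crashed the app in Jay's AT&T example would, under this pattern, either succeed on a retry or degrade to a fallback screen.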
Time for an afternoon break. Go outside and get some refreshments. I don't know if we had ice cream yesterday – maybe there's ice cream today. Go get your favorites before they're gone. Network, meet with each other, and stop by the Gigaom Research booth. Find out what they are doing, because it's pretty awesome. Get your snacks, and General Sessions will resume at 4:10. Get your nominations in for King or Queen of Mobilize. See you at 4:10, everybody.