Monday, June 27, 2016

Mobile app onboarding: how to lower the barrier?

The onboarding flow in your app begins when a potential user launches your app for the first time and has to be convinced of the app’s benefits and, eventually, of why he or she should sign up for it. This is the process in which you want to convert a potential user (visitor) into an engaged and active user who will use your app repeatedly.

It is even better if you can convert your visitor into a user who refers others to your app (an ambassador) or into a paying user (a customer)! To accomplish this, the first impression of your app should be impressive from a visual perspective, and it should explain what is in it for them.

The onboarding flow of well-known apps

Here is how IFTTT does the onboarding in its app. It shows multiple slides explaining why you should want to use it and what the app is all about. Before you decide to sign up you have a clear impression of what to expect.

If you want to see how other well-known apps take care of the onboarding process, you can check out the User Onboard site or the UX Archive.

Many apps, even well-known ones, require you to sign up on the first page with little to no explanation of what the app is about. That may work well for the Facebook app, which almost everybody is familiar with, but it is not going to work for your app.

Onboarding patterns

To lower the onboarding barrier there are multiple well-known patterns that you can choose from or combine. Some of these patterns are listed here:

The Introduction approach shows a couple of slides and often requires the user to sign up afterwards, but some apps choose to show the content of the app right away. A Tutorial or Tour approach shows the real app, pointing out some example cases.

A Joyride approach allows the user to use the app right away, highlighting from time to time features that are new to the user. It is a great way of showing what the app is all about, but if your app is complex it may also be a little overwhelming if you do not do this carefully.

A Social sign-up allows the user to perform a quick sign-up using his or her Twitter or Facebook account, for example. Signing up may be required in order to continue using the app, but it will lower the barrier if you first show what the app is about and only ask the user to sign up when it is needed to proceed.

Late sign-up is a concept that is often seen in e-commerce (and sometimes in m-commerce) solutions. You only have to sign up when you want to check out.

Another interesting concept is Continuous onboarding (LinkedIn, for example, is doing this). We want the barrier to be as low as possible, but we also want user profiles to be as complete as possible. The concept can be very powerful because it comes with benefits from both worlds. It lowers the barrier (by asking only for the minimum amount of credentials) and it eventually results in rich user profiles, by encouraging the user to complete his or her profile later.
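The idea behind continuous onboarding is easy to sketch in code. The field list and flow below are made up for illustration (not taken from LinkedIn or any SDK): ask only for the essentials at sign-up and let a helper decide which single profile question to ask next.

```javascript
// Hypothetical profile fields, in the order we would like to ask for them.
var profileFields = ['email', 'name', 'avatar', 'bio', 'interests'];

// Returns profile completeness as a fraction between 0 and 1.
function completeness(profile) {
  var filled = profileFields.filter(function (field) {
    return Boolean(profile[field]);
  });
  return filled.length / profileFields.length;
}

// Returns the next field to (gently) ask the user for,
// or null when the profile is already complete.
function nextPrompt(profile) {
  for (var i = 0; i < profileFields.length; i++) {
    if (!profile[profileFields[i]]) {
      return profileFields[i];
    }
  }
  return null;
}
```

For a brand new user who only provided an email address, nextPrompt returns 'name', so that is the one question you would ask on the next visit instead of a long registration form.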

A known user is more valuable than an unknown one

Probably the best onboarding flow does not require any sign-up or login at all, so you should ask yourself: do you really want your users to sign up? On the other hand, it is also true that a known user is more valuable than an anonymous one. In fact, we are talking about users versus visitors here. Users eventually become customers, while visitors probably never will.

A social sign-up has multiple benefits, not just for the user but also for us developers. It avoids a lengthy registration process with many fields. The chance that the user will sign up increases and, with the appropriate permissions, you instantly have access to various pieces of information about that user, for example an avatar and a name, which is great for personalization options.

Offering a social login could lead to 50% more sign-ups. After all, a sign-up is then only one or two clicks away!

Use Fabric to allow the user of your app to sign up with Twitter or a phone number

There are services that will take away most of the hassle that comes with the implementation of these kinds of features, such as the Fabric SDK, which is free to use. The SDK has features for signing up with Twitter, but also for signing up with a phone number, just like WhatsApp does.


It is important to keep the onboarding barrier as low as possible and to make clear from the beginning what there is in the app that your user can benefit from.

You can experiment with the different types of onboarding and see what works for your app by obtaining metrics about the conversion and learning from them. And once users are on board (activation) you can focus on optimizing the rest of the flow. Will they keep using the app (retention)? What about referrals and revenue?

As you can see, although intended for SaaS in particular, Dave McClure's startup (or pirate) metrics can be applied to mobile apps as well. AARRR!
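As a toy illustration of such pirate metrics, here is how a simple acquisition-to-revenue funnel could be computed from raw analytics events (the event names and data are invented for this example):

```javascript
// Hypothetical event log; in a real app these would come from
// your analytics backend.
var events = [
  { user: 'a', type: 'install' },
  { user: 'b', type: 'install' },
  { user: 'c', type: 'install' },
  { user: 'a', type: 'signup' },
  { user: 'b', type: 'signup' },
  { user: 'a', type: 'purchase' }
];

// Counts the distinct users that triggered a given event type.
function usersWith(type) {
  var seen = {};
  events.forEach(function (e) {
    if (e.type === type) {
      seen[e.user] = true;
    }
  });
  return Object.keys(seen).length;
}

// Activation and revenue conversion as fractions of installs.
var installs = usersWith('install');
var activation = usersWith('signup') / installs;
var revenue = usersWith('purchase') / installs;
```

With the sample data above, three users installed, two of them signed up (activation 2/3) and one paid (revenue conversion 1/3); those are exactly the ratios you would compare between onboarding variants.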

Get the book!

My new book, which I am currently working on together with Arvi Krishnaswamy, is, amongst other things, about the lean startup methodology applied to mobile app development. In this book you can read more about topics such as onboarding, obtaining (actionable) metrics and split testing.

It also comes with some real world examples for both Android and iOS developers. Expect the book to arrive by the end of this year!

Further reading

Tuesday, June 7, 2016

The bots are back in town. Meet Botterik on Slack

Chat bots have been around since the introduction of ELIZA somewhere in the sixties. That probably was cool back then, and then we forgot about bots. But now they are back, and they have become smarter, since they are connected to the internet and because we now have improved AI algorithms and, with that, APIs for semantic parsing.

Companies such as Apple, Microsoft, Google, Facebook and Slack have rediscovered the chat bot, and now bots are just everywhere.

Agents (or bots) such as Siri, Cortana and Google Now are supposed to have an answer for anything, but a new trend shows that there is a market for niche bots. On Facebook there are (mostly B2C) bots for providing information about a particular service or for selling you a particular item. On Slack, bots are perfect for automating and delegating business-related jobs (B2B or B2E).

Niche bots, connecting services

On a dedicated platform that is already used for most of the communication, a bot will be accepted much faster, which is why this time the bots will stay. They are no longer stupid or annoying (well, Siri for example can still be pretty stubborn); they actually make sense and can be great personal assistants, capable of connecting one service to another.

If there is an API for it and you can do some smart semantic parsing, then developing your own bot is not hard at all. And if a task cannot be done virtually, you can outsource it to the real world, for example using Amazon's Mechanical Turk API.
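That "smart semantic parsing" can start out very simple. A naive keyword-based intent matcher is often enough for a first bot; the intents below are made up for illustration:

```javascript
// Hypothetical intents; a real bot would plug in an NLP service here.
var intents = [
  { name: 'time',    keywords: ['what time', 'how late'] },
  { name: 'weather', keywords: ['weather', 'rain', 'sunny'] }
];

// Returns the name of the first intent whose keyword occurs in
// the message, or 'unknown' when nothing matches.
function parseIntent(message) {
  var text = message.toLowerCase();
  for (var i = 0; i < intents.length; i++) {
    var match = intents[i].keywords.some(function (keyword) {
      return text.indexOf(keyword) !== -1;
    });
    if (match) {
      return intents[i].name;
    }
  }
  return 'unknown';
}
```

So parseIntent('What time is it?') yields 'time', and anything the bot does not understand falls through to 'unknown', where you could reply with a help message.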

Create a Slack bot yourself

If you do not have a Slack account yet, you can sign up for free. Creating a bot on Slack is pretty easy and it will probably take you less than ten minutes to get the first one up and running. For the first sample, which uses BotKit, you also need to have Node.js installed on your machine.

Create a bot user in the developer section of Slack. Give it a name and click on the Add bot integration button. This will result in an API token that you will need later, and you can add some additional information, such as an icon for the bot, a first and last name, and a description of what the bot does.


If you are (a little bit) familiar with Node.js and JavaScript you can create your first bot with BotKit. BotKit works with Slack, Facebook Messenger and Twilio. Grab it from GitHub or use npm to get it.

npm install --save botkit

Within the botkit folder you will find the file slack_bot.js. You can start your first bot with this command:

token=<your token at Slack> node slack_bot.js

That is all you need to get the conversation started! You can modify the file to make your bot 'listen' for other commands if you want, or you can go through the documentation and examples that come with BotKit.

controller.hears(['what time is it', 'how late is it'],
    'direct_message,direct_mention,mention',
    function (bot, message) {
        bot.reply(message, 'It is ' + new Date().getTime() +
            ' milliseconds after January 1, 1970. ' +
            'Does that answer your question?');
    });
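Handlers like this are easy to unit test if you keep the reply construction in a plain function, separate from BotKit. A minimal sketch in plain Node.js (timeReply is a name made up for this example):

```javascript
// Builds the bot's reply for the "what time is it" command.
// Taking the Date as a parameter keeps the function testable.
function timeReply(now) {
  return 'It is ' + now.getTime() +
    ' milliseconds after January 1, 1970. ' +
    'Does that answer your question?';
}
```

The BotKit handler then shrinks to a one-liner that calls bot.reply(message, timeReply(new Date())).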

.NET bot

If you prefer to use the web API instead or if you want to use C# and .NET for your bot then there is a solution for that as well. To get started with developing a Slack bot in C# you can get the .NET SlackAPI project from GitHub.

There is a little bit more work to do here to make it happen, because this project is not as sophisticated as BotKit. First create an app in Slack. In the developer section of Slack, go to Your apps and click on the Create app button. Enter a name, a short and a long description, select a team and enter some (for now, random) URLs for the instruction link, support link and redirect URL fields. Then click on the Create app button.

Slack confirms the creation of the app, and when you click on the App Credentials button you will find the OAuth information. You will need the Client ID and the Client secret in your .NET app.

Visual Studio

Open the SlackAPI solution in Visual Studio, set the SlackAPI.console project as startup project and open the Program.cs file. Here you can modify the values for clientId, clientSecret and redirectUri, just as they appear in your app settings at Slack.

Customize the app a little bit so it will fit your needs. Then run the app and follow the instructions that you will read in the console output.

  SlackClient.GetAccessToken((response) =>
  {
      var accessToken = response.access_token;
      Console.WriteLine("Got access token '{0}'...", accessToken);

      // Post a message into the #general channel...
      var client = new SlackClient(accessToken);
      client.PostMessage(null, "#general",
          "This is from the app powered by a " +
          ".NET implementation...");
  }, clientId, clientSecret, redirectUri, code);

In this example we post a message directly into the #general channel, but with some extra effort we can do anything in .NET that you can do with BotKit. This could be handy if you already have an existing .NET solution that you want to use with your Slack bot, or if you are more familiar with C# and .NET than with Node.js and JavaScript.


In 2016 bots have become relevant again. With better AI and in a more connected world they have become smarter than ever. With bots dedicated to a particular task, and with the possibility for any company to create new bots for widely used platforms, the adoption of chat bots will be much larger than before. They may not be eating the web yet, but soon they will...

Further reading

Monday, April 4, 2016

TeamCity and HockeyApp: Delivering your iOS app

TeamCity and HockeyApp are awesome tools for creating a daily (or continuous) build and distribution of your iOS app. Most developers will use Xcode to create an ad hoc distribution, but the Xcode command line tools are more convenient for this purpose.

For a recent project I have been using Bitbucket, TeamCity and HockeyApp to create a canary build. In addition, the project uses CocoaPods for dependency management. This approach works well, but it takes some time to figure everything out. If something goes wrong it is difficult to find out what exactly went wrong, although in most cases a clue can be found somewhere in the large log files.

Building an IPA file successfully does not guarantee that the signing was successful as well. So the log file says yes, but your testers or early adopters say no. For this reason it is always smart to verify whether the app can actually be installed on a device using HockeyApp.

I will tell you what I did to make the four of them cooperate and how you can avoid some of the mistakes that I have made. It goes beyond the purpose of this blog to tell you everything about TeamCity or HockeyApp, so for now I assume you have already installed TeamCity and that you have a HockeyApp account.


Obtain a distribution certificate

Use the machine (your own MacBook or a dedicated build server) where TeamCity is running to create a distribution certificate in Apple's developer portal.

Create app ID, add devices and create an ad hoc provision profile

Unless you have already done so, you need to create a new app ID, add some devices and create an ad hoc provisioning profile, just like you are used to doing when creating an ad hoc distribution on your development machine.

Let's create a little script that takes care of the dependencies in the Podfile, builds an archive and creates an IPA file, using a provisioning profile. At the end of the script we will upload the app to HockeyApp and clean up the archive and IPA file. It is that easy. Well, once you know what is going on, things are easy...

Pod install

If you are using CocoaPods to manage the dependencies in your project (and well, of course you do!) then you need to update them on your build server as well. That makes sense. So the first line of the script goes like this:

pod install

Note: If CocoaPods is not installed on your build server then you need to install the gem first.

sudo gem install cocoapods

Build the archive

The command below is what you need for building the archive of a workspace, which is what you have if you are using CocoaPods.

Here we will build the sample workspace using the sample scheme. It will create a sample_ad_hoc Xcode archive file in the work folder. It is the same thing as choosing Archive from the menu in the Xcode IDE.

xcodebuild -workspace sample.xcworkspace \
  -scheme sample clean archive \
  -archivePath "/path to your TeamCity work folder/sample_ad_hoc.xcarchive"

If you have no clue about the schemes (or targets) in your project or workspace, you can use the list command to find out. Execute this command in the directory where your workspace or project resides.

xcodebuild -list

- or -

xcodebuild -workspace sample.xcworkspace -list

The output will be something like this:
Information about project "sample":

    Build Configurations:

    If no build configuration is specified and -scheme is not 
    passed then "Release" is used.


Export the archive

If everything went well, an archive file has been created. The next step is to create a distributable IPA file from it. For this we need the valid provisioning profile that you created in the developer portal previously.

xcrun xcodebuild -exportArchive -exportPath build/ \
  -archivePath "path to work folder/sample_ad_hoc.xcarchive" \
  -exportOptionsPlist exportOptions.plist \
  -exportProvisioningProfile "name of the provisioning profile"

Here you need the provisioning profile that you have downloaded from the Apple developer portal. It will probably be convenient to commit the provisioning profile to the repository as well. You can also download the provisioning profile to a different directory on the build server and include the path to it.

Note: The name of the provisioning profile is the name as you typed it in the Apple developer portal when creating it (or as it appears in Xcode, if you have downloaded and installed the provisioning profile by double clicking on it). Other than you might have expected, it is not the name of the file.

Distribute the IPA file

Verify that a build.ipa file exists in the work folder. If it does, you can distribute it to HockeyApp. This requires an app token from HockeyApp and a HockeyApp app ID. Make sure the app ID is configured for uploading purposes.
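In the build script, that verification can be a simple guard so the step fails fast when the export produced nothing. A sketch (the file name matches the export step above; in the real script you would exit with a non-zero status when the file is missing):

```shell
# Returns success (0) when the exported IPA file is present.
ipa_exists() {
  [ -f "$1" ]
}

# Guard the upload step; in the real script, replace the else
# branch with: exit 1
if ipa_exists build.ipa; then
  echo "build.ipa found, ready to upload"
else
  echo "build.ipa not found; export must have failed" >&2
fi
```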

Using the curl command you can easily upload the IPA file you have just created. If this is a daily (or continuous) build, you probably do not want to notify all users each time a new version is available. You can use notify=0 for that.

HockeyApp obtains the version ID from the IPA file, so you might want to create an auto-incrementing script later.
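Such an auto-increment script can be as simple as keeping a build number in a text file and bumping it on every build (bump_build_number is a made-up helper name; wiring the number into the IPA's Info.plist is left out here):

```shell
# Reads the current build number from a file, increments it,
# writes it back and prints the new value.
# Starts at 1 when the file does not exist yet.
bump_build_number() {
  file="$1"
  current=0
  [ -f "$file" ] && current=$(cat "$file")
  next=$((current + 1))
  echo "$next" > "$file"
  echo "$next"
}
```

You would call it once per build, before the archive step, and feed the result into the bundle version.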

curl -F "status=2" -F "notify=0" \
  -F "ipa=@build.ipa" \
  -H "X-HockeyAppToken: your hockey app app token" \
  https://rink.hockeyapp.net/api/2/apps/your hockey app app id/app_versions/upload

Clean up

Very important, but easy to forget, is to clean things up, just to make sure that the upload to HockeyApp fails if the build fails as well (instead of silently uploading a stale IPA file). Include this in your script to remove both the IPA and the archive:

rm build.ipa
rm -rf "path to work folder/sample_ad_hoc.xcarchive"


Finally your script looks more or less like this:

pod install

xcodebuild -workspace sample.xcworkspace \
  -scheme sample clean archive \
  -archivePath "/path to your TeamCity work folder/sample_ad_hoc.xcarchive"

xcrun xcodebuild -exportArchive -exportPath build/ \
  -archivePath "path to work folder/sample_ad_hoc.xcarchive" \
  -exportOptionsPlist exportOptions.plist \
  -exportProvisioningProfile "name of the provisioning profile"

curl -F "status=2" -F "notify=0" \
  -F "ipa=@build.ipa" \
  -H "X-HockeyAppToken: your hockey app app token" \
  https://rink.hockeyapp.net/api/2/apps/your hockey app app id/app_versions/upload

rm build.ipa

rm -rf "path to work folder/sample_ad_hoc.xcarchive"

You can store the script in a file and call it in a single build step, or you can create multiple build steps. With some modifications you can also use the script for Jenkins instead of TeamCity.

These are just the basics of a CI/CD flow and there are many things that you could add with a script or additional build steps. What about automated unit testing? Or running Cucumber tests on your build server? That would be fun too!

Further reading

Sunday, January 31, 2016

7 Parse alternatives or Parse It Yourself

Last Friday it was a bit disappointing to find out that Parse will discontinue its services. It used to be my favorite mBaaS, as it was perfect for prototyping but also suitable for production data. It is a scalable solution, it has great documentation and it is easy to use.

Many mobile developers of iOS and Android apps rely on the Parse backend for data storage and push notifications. The fact that Facebook has decided to discontinue it is a bit surprising, as they put a lot of effort into supporting Apple TV (tvOS) and Apple Watch recently.

What is Parse actually?

Parse is a mobile Backend as a Service (mBaaS) that uses a MongoDB database to store data and Amazon S3 to store files. The Parse SDKs for Android and iOS include handy stuff such as caching and uploading data and files in the background. Other features are analytics, push notifications and cloud code, which is useful for the integration of mail and SMS functionality for example.

If you want to create a new app using an mBaaS right now, there are some interesting alternatives. But if you have an app that currently uses the Parse SDK, your only option is probably the Parse Server. Here is a tutorial to find out what you need to keep your app running on the Parse technology. Or have a look at GitHub, where the source for the Parse Server is hosted.

Some of the features supported by the Parse Server:
  • CRUD operations
  • Schema validation
  • Pointers
  • Users
  • Installations
  • Sessions
  • Roles

Some of the not or not fully supported features:
  • Push notifications
  • Facebook login
  • Web based dashboard

It should not be too difficult to reanimate most of the Parse functionality using the recently published open source Parse Server. And that is what The Distance has planned to do. Well, more or less: they have plans to offer hosting for the Parse Server.

7 Parse alternatives

1. Back4App

Updated: Back4App is a new mBaaS for building and hosting Parse APIs. It comes with a migration plan for your existing Parse solutions and it looks very promising.

2. Firebase

Firebase is a scalable real-time backend for your web app.


3. BaasBox

BaasBox is an open source backend for your mobile app. It has SDKs for iOS, Android and JavaScript.

4. Quickblox

QuickBlox is about building blocks for a backend infrastructure. It offers data storage, push notifications, text and video chat, and many other features.

5. Azure

Microsoft Azure comes with support for push notifications and other mobile services. Since the platform is here to stay, you could consider using Azure to host your new MongoDB database and the Parse Server.

6. Backendless

Backendless provides an instant mobile Backend as a Service and an overall application development platform.

7. Pubnub

PubNub is a real-time network that enables software developers to rapidly build and scale real-time apps by providing the cloud infrastructure, connections and key building blocks.


It was only in 2013 that Parse was acquired by Facebook. Using an mBaaS for prototyping purposes is great, but you cannot (fully) rely on it, even when big names are involved. Or should I say: in particular when big names are involved?

We will miss Parse but not for a very long time I guess. There are plenty of alternatives and releasing the Parse Server as open source might come with some new and interesting opportunities.

Further reading

Tuesday, December 15, 2015

Apps on the big screen part III: Debugging on an Android TV

How cool would it be if you could debug your TV app on a real device! For tvOS, all it takes is a provisioning profile and a USB-C to USB-A cable to connect the Apple TV 4 device to your MacBook. How to enable the debugging option on a real Android TV is less obvious. It starts with selecting the right Android TV device.

Android 5.0 set-top boxes are hardly available in my part of the world, and getting one from AliExpress is not an option, because the delivery times are too long and I do not have that much patience. So I decided to get myself a TV running on Android.

1. Pick the right Android TV

First of all, if you have plans to develop apps for Android TV and want to be able to debug them, it is important to decide which brand and model you pick. Previously I made the mistake of choosing a Philips TV, the 32PFK6500 to be exact. It was the only 32 inch model that was available, which made it the perfect development TV, or so I thought.

It turned out to be not such a good idea. There is no way to debug and test your apps on this or other recent Philips Android TVs, basically because of Philips' security policy. Yes, you can unlock the developer menu, but you will never be allowed to connect ADB. This makes debugging impossible.

Too bad! It is a nice TV and I really enjoyed the Ambilight experience. This TV is probably great if you just want to watch TV and use some apps, but it is not really suitable for app development, although there seems to be a workaround to at least get your app tested through the Google Play alpha and beta distribution mechanism.

Tip of the day: if you want to return your smart TV to the shop, do not forget to restore it to the factory settings. You may have entered your Google account and other details that you want to erase first. Unlike me, do this before you put everything back into the box ;)

So I went back to the shop and exchanged the Philips TV for a Sony Bravia TV, the 43W80xC. With this television I made a new attempt; this time with success!

2. Unlock the developer menu

Just as with Android running on a smartphone, you have to unlock the developer menu first. To do so, click the home button on the remote control, go to Settings, choose About and scroll down until you see the Build option. Click seven times on it to unlock the developer menu.

Before you continue: yes, here comes the disclaimer. I guess there is a good reason for manufacturers such as Philips to prevent app debugging and to disallow apps from unknown sources. Use this tutorial at your own risk. My TV did not explode or anything like that, but I am not too sure about yours ;)

3. Enable ADB debugging

The developer menu will appear under System Preferences. Choose this option and then choose Debugging. Here you can change the setting for ADB debugging to On.

4. Debugging over LAN

I have not found a way to do USB debugging yet. It just does not seem to work, although there are multiple USB (2.0) ports available on the device. So let's do this slightly differently. Out of the box, the Sony Android TV seems to allow debugging over a LAN connection directly. Get its IP address and get connected!

Note : Your MacBook (or PC) and the TV have to be on the same network to make the magic happen.

Click on the Home button on the remote control and under the Settings section choose Network settings. Next click on Wi-Fi or Wired LAN, depending on how your TV is connected. Under IP address you will see the TV's IP address.

Open a new terminal window and connect ADB with the IP address (using the default port 5555) you have just found.

adb connect <ip address of your tv>:5555

If everything went well, the result will be something like: connected to <ip address>:5555. At this point the Philips TV earlier returned the message Connection refused.

5. Create an Android TV project in Android Studio

In Android Studio create a new project and choose TV as the platform. This will create a ready-made media center app for you, which you can modify if you want to.

6. Launch your app

If the ADB connect command succeeded in the previous step, then when you run your app in Android Studio the TV will be shown under Connected devices. Select it and click on the OK button.

The first time, the Allow USB debugging dialog will pop up. Choose Always allow and click on the OK button to continue.

Another dialog that may appear is the one that says Allow Google to regularly check device activity for security problems.... So far I have chosen to decline this, but I guess it will not do too much harm if you choose Accept.


As you can see, it is not that difficult to debug your Android TV app once you have the right equipment and know how to configure things. With everything up and running, the next challenge is to create a really cool TV app.

Further reading

Monday, December 7, 2015

App of the rings: Neyya, Android SDK and BLE

My great friend Wim Wepster gave me this interesting Neyya ring. Fortunately it did not come with a proposal. Instead it was a great opportunity to explore Bluetooth and alternative wearables, such as this ring.

Neyya is a ring that can send gestures, such as taps and swipes, to a mobile or other device that supports BLE. You can use it for presentations or games, although you could just use your mouse, watch or phone for that as well. So I wondered how it works and what could be an interesting use case for it.

Neyya comes with an iOS and Android app, but also with an Android SDK. I was having some trouble with it, as it was not able to detect my Neyya ring at all. For the time being (as there must be a more elegant solution) I have fixed this by modifying the NeyyaBaseService class within the NeyyaAndroidSDK project.

I removed the check to determine whether the Bluetooth device is a Neyya ring or not from the onLeScan method.

 private BluetoothAdapter.LeScanCallback mLeScanCallback =
   new BluetoothAdapter.LeScanCallback() {
     public void onLeScan(final BluetoothDevice device, int rssi,
       byte[] scanRecord) {
         String deviceAddress =
           device.getAddress().substring(0, 13);

         // Disabled so that any BLE device is reported,
         // not just Neyya rings:
         // if (neyyaMacSeries.equals(deviceAddress)) {
              NeyyaDevice neyyaDevice = new NeyyaDevice(
               device.getName(), device.getAddress());
              if (!mNeyyaDevices.contains(neyyaDevice)) {
                   logd("Device found - " + device.getAddress() +
                    " Name - " + device.getName());
                   // ... remainder of the method unchanged
              }
         // }
     }
   };
I did the same thing for the isNeyyaDevice method, like this.
public boolean isNeyyaDevice(NeyyaDevice device) {
  String deviceAddress =
    device.getAddress().substring(0, 13);

  // Disabled for the same reason:
  /* if (!neyyaMacSeries.equals(deviceAddress)) {
       logd("Not a neyya device");
       mCurrentStatus = STATE_DISCONNECTED;
       return false;
     } */
  return true;
}

Okay, I have to be more careful about which Bluetooth device I pick from the list of available devices, but at least I am able to continue my journey. Let's connect to the device.

Great! But what is it that we are trying to solve here?

The supported gestures are taps, double and triple taps, and swipes left, right, up and down. Now, what problem could this ring solve other than the things that already come with the Neyya app, such as capturing a picture, turning the volume of a device up and down, and moving to the next or previous song?

One of the problems that the Neyya ring could help me with is something that most people will recognize. Your best ideas come up when you are not able to write them down immediately, for example while taking a shower or driving a car. In such cases it would be great to just rub your ring to create an audio note. That is easy to implement.

To create a prototype I have modified the ConnectActivity class a little. First, in the onReceive method of the BroadcastReceiver implementation, I call a new method, actOnGesture.

 else if (MyService.BROADCAST_GESTURE.equals(action)) {
   int gesture = intent.getIntExtra(MyService.DATA_GESTURE, 0);
   actOnGesture(gesture);
 }

That method goes like this

    private String actOnGesture(int gesture) {
        switch (gesture) {
            case Gesture.SWIPE_UP:
                stopRecorder();
                return "SWIPE_UP";
            case Gesture.DOUBLE_TAP:
                startRecorder();
                return "DOUBLE_TAP";
            default:
                return "";
        }
    }

To start recording we create a MediaRecorder instance and create an output file for it in the m4a format. In the stopRecorder method we stop the recording.

   private static String mFileName = null;
   private MediaRecorder mRecorder = null;

   private void startRecorder() {
        if (mRecorder != null) {
            return;
        }
        String timeStamp = "/" + System.currentTimeMillis() + ".m4a";
        mFileName = Environment.getExternalStorageDirectory().
            getAbsolutePath();
        mFileName += timeStamp;

        mRecorder = new MediaRecorder();
        mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        mRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        mRecorder.setOutputFile(mFileName);

        try {
            mRecorder.prepare();
            mRecorder.start();
        } catch (IOException e) {
            logd("Recorder failed: " + e.getMessage());
        }
    }

    private void stopRecorder() {
        if (mRecorder == null) {
            return;
        }
        mRecorder.stop();
        mRecorder.release();
        mRecorder = null;
    }
Do not forget to add the right permissions to the AndroidManifest.xml file before you test the app.

    <uses-permission android:name="android.permission.BLUETOOTH" />
    <uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

As soon as the app is running and the ring is connected, double tap on the ring to start recording and swipe up to stop the recording. You can use an app such as the Astro file manager to locate and play the audio file that you have recorded.


Later I will release the code of this POC on GitHub; I need to work out the concept a little bit more. Anyway, I am not fully convinced yet whether the Neyya ring is a useful wearable device or not, but by examining the code I have learned something about Android and Bluetooth, and I now have a memo recorder that I can use just about anywhere.

Do not forget to take your phone with you as well, wherever you go. Wait for my first shower voice memos to arrive ;) Of course you can use the ring plus the app as a spy tool if you want, or find other purposes for it. My precious...

Further reading

Monday, November 23, 2015

Android Studio 2: Much faster and enhanced testing support

Android Studio 2.0 comes with some great new features. Building and deploying apps has become much faster. The new Instant Run feature, for example, allows you to quickly see the changes you have made.

The new emulator runs much faster and comes with enhanced testing support. The emulator supports Google Play Services, and phone calls, low battery conditions and GPS locations can be simulated. It also supports dragging and dropping APK files, just like Genymotion does.

More speed, that is what Android developers need. And with Android Studio 2.0, which is currently available as a preview in the canary channel, speed is what we get.

Enable Instant Run

With Instant Run you build and deploy an app to an emulated or physical device just once; then, as code is changed, it only takes a few seconds before you can see the changes in the running app.

To see the new stuff for yourself you can grab an Android Studio 2.0 copy from the canary channel and enable the Instant Run feature for your existing apps.

From the Android Studio menu choose Preferences (on OS X). In the Preferences dialog expand the Build, Execution, Deployment option and choose Instant Run. You probably need to click on the Update project link to enable this new feature. In my case I also had to update the build tools and resync the project.

Once you have done that, you are good to go. Run and deploy the app using the Run button.

While your app is running you can modify your code, for example change the text of a toast being displayed in your app. As a small demo I have modified one of the recipes from my book, but you can try this with any app of course.

Now you just hit the Run button again. A toast will be displayed to notify you about the changes. Indeed, we no longer need to restart the activity to see our changes.

Note! Instant Run is a great feature, but it does not (yet) support all kinds of changes. Some of these limitations are known, such as changing annotations, static fields or methods. Other kinds of changes, such as modifications to the layout, should be supported I guess, but I was not able to make that work.

It might be because the project that I am using for testing has multiple flavors? Or it could be because this is just a preview of Android Studio 2.0 and I need to be a little bit more patient and wait for a more stable release.


Android Studio 2.0 is focused on speed and better testing support. I think that is exactly what Android app developers deserve after struggling so many times with speed (in particular with Eclipse, in the old, less good days) and with the many fragmentation challenges we still have today.

Just like the Android OS itself Android Studio also has become mature and that is great news!

Further reading