Going to Google I/O this year? We’d love to meet you! AppThwack will be demoing in the developer sandbox Wednesday through Friday on the 3rd floor in the Android section. It’ll be great to put faces with names, answer questions, and talk shop about test automation.
Our demo will cover how you can test your Android, web, or (shhh) iOS apps on over 200 real devices hosted in our state-of-the-art device lab. We’ll show you how reports generate in minutes, as well as how you can run your own tests written in popular mobile automation frameworks like JUnit, Robotium, Calabash, and more. Basically, we’ll show you how to save a bunch of time so you can focus on building great apps.
In addition to the demo we’ll have some cool new shirts and stickers showcasing the logo many of you seem to enjoy (designed by the amazing Lloyd Winters).
We’re super stoked to be invited to Google’s flagship conference! We’re even more stoked to meet you! Founders Pawel and Trent will be on hand Wednesday through Thursday along with the newest Thwackers (we’re trying that name out) to join our team.
P.S. – We’re sponsoring a pre-I/O party on Tuesday, May 14th. It’s likely sold out at this point, but you can find out more at meatup.io. If you like nerding out about Android development and eating Brazilian barbecue, this is pretty much the perfect event.
While you’ve been building great apps we’ve been hard at work making AppThwack your one-stop shop for automated app testing on real devices in the cloud. We’re excited to announce that AppThwack now supports automated testing of Android, iOS, and Web apps. It’s clear that today’s reality involves developing customer experiences that transcend device, platform, OS version, and so on, so we’ve adapted AppThwack to better serve your needs. The speed, flexibility, and intelligent reporting you’ve come to expect have simply been expanded so you can ensure a quality experience for your customers regardless of which platform or device they’ve chosen.
Official iOS Support
The lab is now a little more Apple-y to the tune of 20+ devices and counting. Just as we don’t root Android devices, none of our iOS devices are jailbroken. They’re exactly what your customers are using and provide a mixture of hardware versions, OS versions, and carriers. Check out the device lab for the specifics.
Flexible Automation and Intelligent Reporting
There are many popular automation frameworks for mobile app testing, but no clear winner. In fact, it’s becoming more and more common to use specific frameworks depending on the type of test being automated. To this end, we now support JUnit/Robotium, Calabash, and MonkeyTalk for Android, and UI Automation, KIF, and Calabash for iOS. We take care of running your tests in parallel, parsing your results for high-level analysis, collecting logs, pulling screenshots, and tracking performance. The result is a consistent and easy-to-understand report that focuses on the problem areas; we don’t just dump a pile of data on you and expect you to figure out what to do with it.
Fast. Very Fast.
One of our goals has always been to provide a fast and responsive service that saves you hours, if not days, every test cycle. Our queuing algorithms have been revamped so you’ll get results as soon as they’re ready, even if some of the devices you’ve selected are busy running other customers’ tests. We’re also monitoring device usage and installing duplicates of those devices that are extra busy. You’ll rarely find yourself in a queue waiting for results, whether you select a couple or a couple hundred devices.
New Subscriptions and Passes
After tons of conversations and much thought, we’re moving away from tiering plans on the market popularity of devices, like Top 10 and Top 25. We understand your app’s most popular devices may differ from the overall market’s, so rather than dictating what you can test, we now let you choose from any of the devices in our ever-expanding lab. Check out the new subscriptions on the pricing page.
In addition, we’ve done away with a la carte runs and switched to a more powerful day-pass model. You can buy 1- and 3-day passes with full access to our device lab in the cloud. For those of you looking to test just before a major release, this is ideal.
AppThwack in Your Build System
It’s now super easy to kick off nightly tests via our API. Medium subscriptions and up include the ability to schedule tests automatically through a simple RESTful interface. A detailed post is in the works, but for now check out the API docs for more info.
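To give a rough flavor of what a nightly kickoff could look like, here’s a hypothetical Ruby sketch. The endpoint, field names, and auth scheme below are illustrative placeholders, not the real interface, so check the API docs for the actual details.

    # Hypothetical sketch of kicking off a run from a nightly build script.
    # The URL, fields, and auth below are placeholders, not the real API.
    require 'net/http'
    require 'uri'

    uri = URI('https://appthwack.com/api/run')            # placeholder endpoint
    request = Net::HTTP::Post.new(uri.request_uri)
    request.basic_auth('you@example.com', ENV['APPTHWACK_API_KEY'])
    request.set_form_data('project' => 'my-android-app',  # placeholder fields
                          'pool'    => 'All Devices')

    response = Net::HTTP.start(uri.host, uri.port, :use_ssl => true) do |http|
      http.request(request)
    end
    puts response.body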
Just like you, we’re cranking away on new features and better ways to serve our customers. There’s a ton of stuff on the way, and if you have specific requests please don’t hesitate to hit up our ideas page and let us know.
This is a guest post by Ezra Siegel, Apptentive’s VP of Community.
It’s a cutthroat app world out there. Customers expect mobile apps to work perfectly all the time and get easily frustrated when apps freeze, crash, or exhibit any number of other bugs and performance issues. And displeased customers act on it: 96% of them will leave a negative review in the app store after a frustrating experience with an app.
If you’re an app developer, consider yourself lucky if your app only gets deleted after one such issue. Having your app deleted isn’t ideal, but you remain relatively unscathed if that’s the only fallout from a customer hitting a performance issue.
You’re unlucky if the customer decides to leave a negative review in the app store. This has a far wider effect than one customer simply deleting the app off their device and forgetting about it. One negative review has the potential to turn away other prospective customers and starts giving your app a bad reputation.
App developers need to take precautionary steps to protect themselves and their hard work from customers who are quick to leave negative reviews after a performance issue. Don’t wait for a negative review to appear; be preemptive and help ensure that as few of those issues happen as possible. Don your armor and follow these two steps to shield yourself from negative reviews.
1. Test, Test, Test, Test, Test, Test, Test, Test
App developers need to test their app, before launching it, on as many different devices as possible. This process is crucial to working out the kinks in the app. The more you test, gather data, and improve the app, the more likely you will avoid problems that could result in negative reviews down the road. Understanding how your app responds on each device is critical to preparing yourself for having a live app out there in the app ecosystem.
Your app needs to work well on every screen size, pixel density, and OS version. You are mistaken (I was) if you think that iOS is simple to test because it has fewer devices and operating systems; in fact, testing on iOS becomes more difficult with each new OS update and hardware version. Many older devices are still in use, and newer ones are often jailbroken, which can lead to unique stability issues.
On Android there are literally hundreds of different devices running different operating systems, making it extremely difficult to test across all of them. You can never have too much Android device coverage for your automated tests, and running them on every device yourself is extremely cumbersome. Having a couple of devices is a good start but not enough, and testing on an emulator is less than ideal. If a bug didn’t pop up during development on your test device, that only tells you it doesn’t exist on that particular device and operating system; it may still be lurking on others. Fortunately, you can take advantage of a service like AppThwack to seamlessly run your automated tests on hundreds of devices and provide you with screenshots of the issues and reports for every test.
2. Value your customers by giving them a voice inside your app
Even after hundreds of tests, there is no guarantee that nothing will go wrong. Software on a customer’s phone or other elements that are out of your control as an app developer can cause your app to behave poorly. 99% of people who use mobile apps will take some kind of action when an app doesn’t perform as expected. In these cases, having a way to communicate about the problem and engage the customer as transparently as possible is key to keeping your customers happy, even during a crisis.
The first step to keeping negative feedback out of the app store is providing an alternative place for it to go. People often just want their frustrations to be heard. By providing a place inside the app where a customer can leave feedback, voice frustration, or submit a suggestion, the likelihood that a negative review is left in the app store drops dramatically.
By responding and resolving the issue you are engaging the customer and developing a relationship that will encourage them to continue to use the app. Being able to turn negative reviews into happy customers will greatly improve your app, and help build a community around your app. Customers who had an initial negative experience become some of the greatest evangelists when the time is taken to listen to them and solve their issues.
Building an app’s content and utility is what app developers want to focus on. However, it is important to understand that testing your app pre-launch will save you many headaches later on. Taking steps to keep negative reviews out of the app store is crucial to the success of your app. Creating a place to communicate directly with your customers inside your app will help intercept negative reviews and make customers feel valued. Prepare for problems by eliminating them with pre-launch testing. Be ready for the problems that slip through the cracks by giving customers a direct channel to reach you. Protect your work, and make it better, by testing thoroughly ahead of time and being available when things go wrong.
Ezra Siegel is Apptentive’s VP of Community. Apptentive provides in-app feedback tools for app developers to improve ratings and user experience. He can be found writing on topics ranging from customer service to mobile apps, and most often where the two meet. You can read more along those lines at Apptentive’s blog, and keep up with Ezra and Apptentive on Facebook and Twitter.
This is a guest blog post by Matthew. He’s part of the multi-platform team creating the photo app Woven.
GitHub’s Android app is open source and provides an excellent opportunity to demonstrate how to test using Calabash Android. To get started, I recommend installing Ruby 1.9.3 using rvm and then running gem install calabash-android. Finally, clone the example repo that contains the apk and test files. You will have to run calabash-android resign com.github.mobile_1.6.1.apk once before the tests will work. Calabash Android also expects the Oracle JDK to be installed, in addition to the Android SDK and ant.
Install Android 4.2 (API 17), SDK Tools, and SDK Platform-tools using the android command.
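Pulled together, the setup looks roughly like this (the repo URL is a placeholder for the example repo mentioned above, and the exact SDK filter flags may vary):

    # Assumes the Oracle JDK is already installed.
    rvm install 1.9.3 && rvm use 1.9.3        # Ruby via rvm
    gem install calabash-android              # the Calabash Android gem

    android update sdk --no-ui                # SDK Tools, Platform-tools, API 17

    git clone <example-repo-url>              # placeholder; see the example repo
    cd <example-repo>
    calabash-android resign com.github.mobile_1.6.1.apk   # one-time resign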
The goal is to test that we can log into GitHub and then navigate to Gists, Issues, and Bookmarks. To keep each test independent, we’ll want to reset between each scenario by setting an environment variable.
app_installation_hooks.rb has been modified to use clear_app_data to reset app state instead of uninstalling and reinstalling each time. Some apps crash during uninstall, and installing takes a long time, so it’s ideal to install exactly once. Note that clear_app_data will remove all accounts on the device, so make sure that’s acceptable before running the tests.
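A sketch of what that modification can look like; this is illustrative rather than the repo’s file verbatim, and the helper and environment variable names follow calabash-android conventions but may differ between versions:

    # features/support/app_installation_hooks.rb -- illustrative sketch.
    installed = false

    Before do |scenario|
      unless installed
        uninstall_apps                     # start clean, exactly once
        install_app(ENV['TEST_APP_PATH'])  # the instrumentation/test server apk
        install_app(ENV['APP_PATH'])       # the app under test
        installed = true
      end
      # Reset state without reinstalling; beware, this also removes
      # all accounts on the device.
      clear_app_data
    end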
In addition, if the tests fail, we’d like to see a screenshot. The best way to take screenshots is via USB.
The feature file that contains the steps is simple. Before each scenario, the Background is run, which logs us into the app. There is one scenario for each test.
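As an illustration, a feature file along these lines would do the job (the step wording and activity names here are placeholders, not copied from the repo):

    Feature: GitHub app navigation

      Background: Log in
        Given I wait for the "LoginActivity" screen
        When I enter "my-user" into the login field
        And I enter "my-password" into the password field
        And I touch "Sign in"
        Then I wait for the "HomeActivity" screen

      Scenario: Navigate to Gists
        When I touch "Gists"
        Then I wait for the "GistsActivity" screen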
To run the feature file with cucumber, each step must be implemented in a step file. For this example, we need three abilities: 1) wait for an activity, 2) set text, and 3) touch. The values used in the step file were found with calabash-android console. Wildcard queries such as query '*' are helpful for seeing what’s on the screen and how to access it when using the console.
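A sketch of the matching step definitions; the EditText ids below are placeholders, and the real values come from calabash-android console:

    # features/step_definitions/navigation_steps.rb -- illustrative sketch.
    Then(/^I wait for the "([^"]*)" screen$/) do |activity|
      wait_for_activity(activity)   # implemented via dumpsys; see below
    end

    When(/^I enter "([^"]*)" into the (login|password) field$/) do |text, field|
      id = field == 'login' ? 'et_login' : 'et_password'   # placeholder ids
      query_wait_set_text("EditText id:'#{id}'", text)
    end

    When(/^I touch "([^"]*)"$/) do |label|
      touch_wait(["TextView text:'#{label}'"])
    end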
dumpsys is used to implement wait_for_activity in wait_for_activity.rb. Robotium, which powers the built-in wait for activity, does not always know which activity the app is on, so we’re avoiding performAction('wait_for_screen', ...).
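A sketch of what a dumpsys-based wait might look like; this polls adb directly and assumes a single connected device, so the real wait_for_activity.rb may differ:

    # wait_for_activity.rb -- illustrative sketch. Polls `adb shell dumpsys`
    # for the currently focused window instead of relying on Robotium's
    # performAction('wait_for_screen', ...).
    def wait_for_activity(activity, timeout = 30)
      deadline = Time.now + timeout
      until Time.now > deadline
        windows = `adb shell dumpsys window windows`   # assumes one attached device
        return true if windows =~ /mCurrentFocus=.*#{Regexp.escape(activity)}/
        sleep 1
      end
      raise "Timed out waiting for activity #{activity}"
    end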
For setting text, a wrapper method query_wait_set_text is used that tries multiple times to set the text. It’s only considered successful if getText returns the expected text. This keeps the test from breaking if setText fails the first time. (Both wrappers are sketched below.)
For touch, a similar wrapper method, touch_wait, is used to improve robustness. It’s slightly different in that we know it will fail for at least one case. This is solved by providing two ways of identifying the username button: "TextView id:'tv_org_name' index:0" works only on Android 3.x+, while Android 2.x requires "ActionBarContainer id:'abs__action_bar_container'" to find the username. The difference is due to how ActionBarSherlock functions on different Android versions.
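Here’s roughly what those two wrappers could look like; the method names come from the tests above, while the bodies are illustrative:

    # Retry setText until getText confirms the value stuck.
    def query_wait_set_text(uiquery, text, attempts = 5)
      attempts.times do
        query(uiquery, setText: text)
        return true if query(uiquery, :getText).first == text
        sleep 0.5
      end
      raise "Could not set text on #{uiquery}"
    end

    # Retry a touch across one or more queries; the fallback query
    # handles older Android versions, as with the username button.
    def touch_wait(uiqueries, attempts = 5)
      attempts.times do
        Array(uiqueries).each do |q|
          unless query(q).empty?
            touch(q)
            return true
          end
        end
        sleep 0.5
      end
      raise "No element found for #{uiqueries.inspect}"
    end

    # Usage, mirroring the 3.x+/2.x fallback:
    touch_wait(["TextView id:'tv_org_name' index:0",
                "ActionBarContainer id:'abs__action_bar_container'"])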
After running the tests locally, an AppThwack job is scheduled that confirms the tests pass 100% on Android 2 and 4. The individual step results are also available on AppThwack.
Thanks to the calabash-android developers for developing the framework and helping with the code. Thanks to AppThwack for providing an amazing and affordable testing service. I hope you check out calabash-android and submit pull requests to improve the project.
Let’s say you’re working on ironing out that pesky HTC menu issue in your app, and you’d like to know if you’ve succeeded in embracing a life without a menu button. Hard to imagine a better way than verifying on a pile of un-rooted, real devices. To get started, simply upload your app and click the device drop-down. At the bottom you’ll see an entry for selecting devices from a list.
Once selected, you’ll see a dialog containing all of the available devices. In this case we want all HTC devices so we simply filter for HTC and select them all.
We might want to use this selection again, so let’s give it a name so it’ll be available in the future. If you don’t give it a name, that’s fine; the pool will be treated as a temporary pool and not saved for future test runs. Also of note, if you have other users on your projects, they’ll have access to the pools you create.
That’s it! Your app will be tested on the selected devices.
You’ll notice a few built-in pools along with some dynamically created ones that appear under certain circumstances. Built-in pools include “All Devices,” which includes every device available with your current plan, followed by “Top 10” and “Top 25” if your plan is on a higher tier.
Dynamic pools include a pool of all devices used in the previous run, assuming there is one in your project, along with a pool containing just the devices that had test failures, again assuming there were failures in the previous run. This makes reproducing bugs, debugging them, and testing the subsequent fixes straightforward.
Device pools provide a very effective way to do focused testing between incremental changes in your app. It’s also handy to build pools of devices that correspond to your development cycle, such as smoke and regression. Try it out and let us know what you think, either in the comments or on our support page.
We put a lot of time and thought into how your Android and web apps’ AppThwack results are formatted in our reports. The goal is to summarize hundreds of devices’ worth of data into a clear format while still allowing you to quickly drill down and get the nitty-gritty details. We also know that sometimes you want to process or organize your results in a certain way. For instance, you might want to grep all of your logs for a specific message or build a custom report for a client containing every screenshot. Our new download feature makes what was previously a difficult task extremely easy.
As soon as the test run is complete you can download an archive containing all logs and screenshots broken down by device. It’s also possible to download the log for a specific device and test from the log viewer.
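For example, once the archive is downloaded, grepping every device’s log for a crash signature is a one-liner (the archive name and layout here are illustrative):

    unzip results.zip -d results            # logs and screenshots, per device
    grep -r "OutOfMemoryError" results/     # which devices hit the crash?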
You can still share an AppThwack report by clicking Share and sending the direct link. Between sharing and exporting results, you’re now able to conduct your own analysis on your Android or web app’s results, as well as get the information to those that need it.
With the explosion in popularity of affordable phones like the Samsung Galaxy Y, Android app performance management and tracking has become even more important. Many of these devices have lower-powered CPUs and limited memory. Regardless of the device, excessive CPU usage leads to battery drain, a sluggish user experience, and eventually irate customers. Lax memory management can lead to out-of-memory exceptions, especially on the aforementioned lower-end devices.
To get into the details of memory management for Android applications, I’ll direct you to Dianne Hackborn (an engineer at Google), who has an excellent answer over on StackOverflow, and Jonathan Norén, who has his own in-depth write-up on his blog. It’s fairly straightforward to gather performance data via adb by using standard utils like procfs for CPU and thread counts and dumpsys (adb shell dumpsys meminfo [your package name]) to get memory data, but interpreting the information and collecting it at the right time can be difficult.
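If you want to poke at the raw data yourself, the commands look roughly like this (substitute your own package name and pid):

    # Memory details (PSS and friends) for your app:
    adb shell dumpsys meminfo com.example.yourapp

    # CPU and thread data live in procfs; first find your pid:
    adb shell ps | grep com.example.yourapp
    # Per-process CPU tick counters:
    adb shell cat /proc/<pid>/stat
    # Thread count:
    adb shell cat /proc/<pid>/status | grep Threads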
We had a long-standing idea to implement a handy way to track and visualize CPU usage, memory usage, thread counts, and other data, and a few weeks ago we finally sat down and hammered it out. If you’ve run tests since the beginning of January you may have noticed some new charts popping up here and there. Now that the kinks are worked out, let’s explain what that data means and how it can help.
First, an example
Let’s say you see an out-of-memory error in your high-level results.
You then click on the log link to go to the details for that specific test and device.
At the top of the log you’ll see performance data gathered over the duration of the test. Uh oh! Take a look at that spike in thread count. With this extra bit of info, you can now look at the logs, correlate the stack trace to your code, and figure out where and why things spun out of control.
Tracking performance data on the test, run, and project level
In our example above, we looked at the performance data for a specific test on a specific device. This is great for isolating specific issues, but what about trending problems?
AppThwack now gathers performance data per device and per test, and we display a summary of those results on the run highlights view. There you can see the CPU, memory, and thread data charted on a scatterplot where each point is a different device, as well as the best (min) and worst (max) devices in each performance category.
Finally, we track performance data on the project level, allowing you to quickly identify emerging trends in performance. The project overview page now contains trends for overall test results (Pass, Warning, Fail), as well as averages, minimums, and maximums for overall CPU usage, memory usage, and thread counts for the past few builds. This makes it really easy to notice a change in, say, memory usage from one build to the next.
What are we tracking and what’s next?
We currently track the percentage of CPU your application is using, how much memory your app is using expressed as PSS (Proportional Set Size – see this excellent article on why PSS is useful), and the number of threads within your process. If there’s a specific piece of data or trend you’d like to see, let us know, either in the comments or on the AppThwack support page. Happy testing!
Both US Presidential candidates released Android apps in the past few weeks, so we thought we’d dip our toes into the political waters and have their apps duke it out.
We ran both apps through our standard, five-minute automated testing service using the 90+ non-emulated phones and tablets we host in our lab. That means each app was installed, launched, screen-captured in portrait and landscape, randomly hit with UI events, and then uninstalled.
In this corner, Mitt’s VP
The Romney app, Mitt’s VP, focuses on a single task: announcing Romney’s running mate. The app was developed by Rockfish Interactive using Adobe AIR and weighs in at just under 12 MB. It’s a statically sized UI with a scaling background, which is a pretty clever way to simulate a responsive design without actually building one.
Motorola Droid Pro’s 320 x 480 on the left and Samsung Galaxy Tab 2 10.1’s 800 x 1280 on the right.
Now let’s see how it did on the actual tests. Because it inherits AIR’s limitations, the app will not run on older ARMv6 devices, meaning it ran on about 60% of our lab.
The numbers denote test passes, warnings, and failures, not individual devices. Multiple tests run on each device.
Not bad at all. It’s rare that an app has no failures or warnings in any of our tests, so this is quite a feat and sets the bar pretty high.
And in this corner, Obama for America
The Obama app, Obama for America, appears to be the go-to app for the remainder of the campaign. Published by the DNC, it weighs in at 4.5 MB, or about a third of the size of Romney’s. Upon launch the user is greeted by a very, very long EULA. The agreement simply scales across the various form factors.
Motorola Droid Pro’s 320 x 480 on the left and Samsung Galaxy Tab 2 10.1’s 800 x 1280 on the right.
Getting down to business, Obama’s app requires Gingerbread or newer, meaning it ran on about 60% of our lab.
The numbers denote test passes, warnings, and failures, not individual devices. Multiple tests run on each device.
As you can see, Obama’s app has significantly more issues than Romney’s.
First, if the device does not have an Internet connection, the process silently dies immediately upon launch. Second, the app fails to install if the Google Apps libraries are not present; this is only a problem for 3rd-party ROMs like CyanogenMod and custom versions of Android, like that found on the Kindle Fire. Third, the app consistently throws an InflateException on the Samsung Galaxy Y and Kyocera Milano.
We also saw instances of IllegalStateException, IllegalArgumentException, and a few random ANRs. Some of these are straightforward to fix while others (Ugh ANRs!) are a bit more involved.
Bonus Round: Mobile Web
Let’s take a look at how the candidates’ mobile web presence fared in our mobile web layout tester (currently in beta). It tests a given site in up to six browsers on 20 devices.
Romney’s site on a couple devices. For the full report, click here.
Romney’s site serves up a mobile-specific interface that scales appropriately across all browsers and devices. It’s a surprisingly consistent interface.
Obama’s site on a couple devices. For the full report, click here.
Obama’s site redirects to a splash page on first visit, and because Opera Mini renders remotely, the splash page is always displayed there. The splash page is inconsistent across devices. The content is line-wrapped and centered, but the size does not scale and the layout does not adapt, meaning the content often extends well below the bottom of the screen.
The main page is a mobile-specific design. On some devices, Opera Mobile renders the placeholder letters with inconsistent vertical alignment or cuts off the top of the text. The site is also significantly heavier in terms of assets, and many screenshots show floating letters and “[X]” glyphs while custom fonts load.
Both sites had security certificate warnings on older devices running Android 1.6, but given the extremely low number of such devices still in operation, we don’t consider it a negative.
Politics aside, when it comes to both native Android apps and mobile web presence, it’s clear Romney has the upper hand, at least for now. As we mentioned above, tests were performed in about five minutes per app using our default automated tests. With specific, targeted testing, many more problems could presumably be uncovered.
It will be interesting to see how the campaigns continue to utilize mobile technology and how well they do it from a technical perspective.
One often-requested feature we’ve heard loud and clear at AppThwack is the ability to publish reports so others can view them without logging in. Maybe you’d like to share the report with a teammate, a boss, or just brag about your results on G+ or Twitter. We’re happy to announce it’s now super easy to mark a report as public and start sharing.
When viewing a run you’ll see a slight shift in the interface and a new “Share” button.
Clicking it will bring up a small window where you can toggle the public availability of the report. Once shared, a link is generated that you can send to anyone you’d like.
Visiting that link allows for full navigation of the report (Don’t worry, nothing beyond the specifically shared report is accessible). This means no more copy+pasting results or requiring others to sign up just to see a report.
To unshare, simply bring up the window again and click “Unshare.”
Enjoy, and as always let us know how we can continue to improve by sending us email or submitting/voting on issues on our support page.
Since the beginning, AppThwack has launched your app and taken screenshots in portrait and landscape. Until now this meant seeing a splash screen, loading screen, or home screen, which, although useful, was admittedly limited.
Today we’re excited to announce a new feature that allows the capture of screenshots at any time on any screen. The latest version (3.3) of Robotium introduced a useful new method, takeScreenshot, that simply captures a screenshot and stores it to the SD card. We’ve taken this cool feature to a new level by integrating Robotium screenshots into AppThwack’s results view.
Simply make a call to takeScreenshot in your Robotium test to have the screenshot appear in your AppThwack report. It’s super easy, and with AppThwack’s help it’s even easier to view the results than if you were using Robotium alone.
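In practice that can be as small as this minimal sketch; MainActivity and the “Log in” label are placeholders for your own app’s pieces:

    // Minimal Robotium test sketch (Robotium 3.3+). MainActivity and the
    // "Log in" label are placeholders -- substitute your own app's pieces.
    import android.test.ActivityInstrumentationTestCase2;
    import com.jayway.android.robotium.solo.Solo;

    public class ScreenshotTest extends ActivityInstrumentationTestCase2<MainActivity> {
        private Solo solo;

        public ScreenshotTest() {
            super(MainActivity.class);
        }

        @Override
        public void setUp() throws Exception {
            solo = new Solo(getInstrumentation(), getActivity());
        }

        public void testCaptureLoginScreen() {
            solo.clickOnText("Log in");   // navigate to the screen of interest
            solo.takeScreenshot();        // saved to the SD card; appears in your report
        }

        @Override
        public void tearDown() throws Exception {
            solo.finishOpenedActivities();
        }
    }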
Just like other AppThwack screenshots, they’re clickable and bring up full-size, pixel-for-pixel images. They’re also displayed in our high-level screenshots view, so you can see every screenshot in one place. Enjoy!