Friday, 5 April 2019

Mysteries of Apple Bluetooth accessories

17:45 Posted by G No comments
I've now moved fully onto Office 365, so I've converted entirely to the Mac world and just occasionally dip into Citrix for a few enterprise apps.

So I'm now starting to see some idiosyncrasies between 365 on Mac and PC. Most of them are well documented, but it is quite annoying to come across messages like these from the 365 Mac dev team:


Anyway, the point of this post is that I've been having trouble getting a wireless Mighty Mouse and an Apple wireless keyboard to work together with my 2018 MacBook Pro. It manifested as the mouse moving in a very jerky fashion and the keyboard being unresponsive, then typing some keys multiple times. I assumed it was interference between the Bluetooth accessories, but having done some searching I found out two things I didn't know.

Firstly, as with Wi-Fi, you can see advanced Bluetooth details by holding Shift and Option while clicking the Bluetooth icon in the menu bar. This shows more details as well as the debug options (for Wi-Fi it's just Option-click on the Wi-Fi icon).
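If you prefer the Terminal, much the same detail is available from system_profiler. Here's a minimal sketch (assuming a recent macOS and Python 3; the exact output format varies between versions):

import subprocess

# Dump the Bluetooth details (controller info, connected devices, etc.)
# that the Shift-Option menu exposes, but from the command line.
report = subprocess.run(
    ["system_profiler", "SPBluetoothDataType"],
    capture_output=True, text=True, check=True,
)
print(report.stdout)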

Secondly, and I think this is the root of the problem I was having, there's this tip:

" With some time and experimentation, I have learned the problem is triggered when I wake the computer from sleep mode using the keyboard.  It appears the keyboard wakes up repeating the keypress.  The only way to stop this is to power off the keyboard and power it on again.

The work around is to wake up the computer using either the trackpad or mouse.  Don't touch the keyboard!"

Thanks to WheelMcCoy on the Apple forums for the tip - https://discussions.apple.com/thread/7555728

Finally, this was happening on macOS 10.13 (High Sierra), but I've just upgraded to 10.14 (Mojave) and haven't seen the problem so far.

Friday, 14 September 2018

Listen to this man

11:57 Posted by G No comments

Bit of a grand title I know...

I was lucky enough to hear Professor Ed Hess talk at an LEF (Leading Edge Forum) event a couple of years ago.  He talked about lots of things I'd never really thought hard about, but which immediately made perfect sense to me.

At the time he had just published his book, Learn or Die, which is well worth a read.  It talks about developing a learning mindset and being much more open to other people's ideas, with concepts such as 'I am not my ideas' and 'my mental models are not reality' (i.e. I know I have biases).  He also talked a lot about the make-up of teams, and how much diversity of thought positively impacts a team's performance.

I still follow the LEF guys (Twitter feed) and also Ed Hess (Twitter feed), and saw that Prof. Hess was speaking at the LEF study tour.

There's an LEF article (link) called 'Rethinking Human Excellence with Ed Hess' which is worth a read, but if you've got 10 minutes, watch the video below.


He talks about how the operational models we developed in the industrial revolution (operational excellence, low failure rates, efficiency, command and control, etc.) are not suitable for the smart machine age of software running everything, AI and ML.  He believes that:

Operational excellence will be taken over by technology and will become table stakes

His view is that we will need to change our approach and focus on what humans do better than machines.

You cannot command and control human beings to be innovative

You cannot command and control human beings to be emotionally intelligent

I think he talks a lot of sense about how we need to change as leaders: being better versions of ourselves, continually learning, and hiring for mindset and behaviours. This will be absolutely crucial in the years to come.

Some of this thinking is very similar to a book I read a while ago, General Stanley McChrystal's Team of Teams, which also proposes changing the structure of organisations away from command and control to a team of teams - I wrote a bit about this here.


Tuesday, 23 January 2018

Fresh Sophos home for Mac install via TeamViewer Gotcha

17:17 Posted by G No comments
After much pulling of hair and wondering what I was missing, it turns out that Apple have changed the permissions in macOS 10.13 (High Sierra) for remote installs of software that installs kexts (kernel extensions).

Anyway, as I'm sure anyone who happens to have IT in their job title knows, one becomes the default tech support for the whole family.  This is exacerbated at Christmas by 'could you just have a quick look at...'

I was checking the father-in-law's Mac, and to save time I was using TeamViewer so I could do it from the comfort of my own home.  All was going well: I downloaded Sophos Home for Mac, got to the final stage of the install, and needed to 'apply' a setting in System Preferences.

I could see it via TeamViewer, but whatever I tried, I couldn't click it!

After many different approaches, like most men I finally resorted to RTFM and found the following on the Sophos website - https://community.sophos.com/kb/en-us/127413

Here's the text from the advisory:

Due to a new security mechanism that Apple has released with MacOS 10.13, called Secure Kernel Extension Loading (SKEL), all non-Apple kernel extension (what we use to intercept files, etc) vendors must be manually added to a trusted list (Any user can add this). This allows the kernel extensions to load and is required for Sophos Anti-Virus to function properly. All 3rd party vendors are impacted by this change, and it is not possible to work around this requirement.
Note: Due to an Apple security restriction, this cannot be done via a remote desktop connection. There must be a locally logged on user. The Allow button will show, but be grayed out if it is accessed via remote desktop.
  1. After installing Sophos Anti-Virus, go to Security & Privacy in the Apple System Preferences window.
  2. Near the bottom of the window, it will list the blocked Kernel Extensions (kexts) by Sophos. Click Allow.
Once authorized, all future Sophos kernel extensions are allowed, even after uninstallation.  This step is not needed again on a reinstall. Kernel extensions already installed during an upgrade from MacOS 10.12 are automatically authorized.
So after a quick call to the father-in-law and him pressing 'Apply' locally at the appropriate moment, all is good.  Hope this saves you some time and heartache, fellow family IT support!
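As a postscript, one way to double-check that the kext really has loaded once the Allow button has been clicked is to ask the system directly. A rough sketch only (assuming Python 3 on the Mac in question, and assuming Sophos's kext bundle identifiers contain "sophos"):

import subprocess

# List loaded kernel extensions and keep any that look like Sophos ones.
kextstat = subprocess.run(["kextstat"], capture_output=True, text=True, check=True)
sophos_kexts = [line for line in kextstat.stdout.splitlines() if "sophos" in line.lower()]

if sophos_kexts:
    print("\n".join(sophos_kexts))
else:
    print("No Sophos kexts loaded - the Allow step probably still needs doing")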

Patching cadence becomes a thing

16:05 Posted by G No comments
I recently wrote on the Hiscox London Market Blog (with the help of the excellent Simon Challis) about the Meltdown/Spectre vulnerabilities in CPUs (the article is here). Two immediate things: firstly, it shouldn't be a popularity contest over which bugs have the nicest logo and website; secondly, most of my thoughts are reflections of Kevin Beaumont's, who I think is one of the most incisive commentators on IT security (and suitably irreverent at the same time).

image from Corax website

So what's the big deal? Well, nothing really. These vulnerabilities have the potential to be a really big deal, but at the moment they're just not. That doesn't by any means mean you should rest easy, and after careful testing you should certainly apply (if safe to do so) all the relevant patches.

What interests me most is how this is changing patching from a boring but necessary (and often neglected) back-office task into something that the board, and soon investors, will be taking notice of. The race is currently on: can enterprises patch before malware authors come up with a remote way to exploit these newly discovered vulnerabilities?

What makes this even more interesting is the fact that it's now easier to watch this battle from the sidelines. A new industry has sprung up to measure cyber risks.

There are the pure risk-scoring players such as Bitsight and FICO, but also a new breed of insurance startups premised on cyber risk scoring and aggregation, such as Cyence (recently acquired by GuideWire), CyberCube (recently spun off by Symantec) and Corax.

How long before the board asks for its own risk score, or before a company's risk score becomes one of the elements considered during M&A discussions ahead of financial investment?

If you're interested in learning more about cyber insurance, there's a BBC article in which I'm quoted here.

Friday, 13 October 2017

Office DDE – How a zero day exploit evolves

12:28 Posted by G No comments
I've been following Kevin Beaumont on Twitter for a while; he's a security architect from Liverpool and quite a regular blogger.

On Tuesday lunchtime I read a tweet from him:

Having read the article at Sensepost, it turns out they have discovered a way to run code (what IT security nerd circles call 'RCE - Remote Code Execution') in Office documents without the use of macros.  Macros are a traditional way of getting malware to run, but companies often block them, and some anti-virus services are configured to remove them, so they're not a reliable way to get malware onto your victim's machine.  Using this DDE feature is a newer and easier way to potentially deliver malware.

Clearly this is a big deal.  The full post from Sensepost explains that they reported it to Microsoft, and Microsoft have said this is expected behaviour and therefore won't be patching it.  This means that customers are vulnerable to this attack vector and will have to find another way to protect themselves.

The main point of this article is to illustrate how quickly things move and how this threat evolves.  According to the Sensepost article, no anti-virus spotted this as a suspicious file.
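As an aside, because a modern .docx is just a zip archive of XML, even a crude script can look for a DDE field where the anti-virus engines at the time didn't. This is purely an illustrative sketch (the string matching is naive and will throw false positives), not a substitute for proper tooling:

import sys
import zipfile

def has_dde_field(path):
    # Word stores field instructions (e.g. DDEAUTO ...) inside the XML parts
    # of the .docx zip archive, so a simple substring search is enough to flag
    # a document worth a closer look.
    with zipfile.ZipFile(path) as docx:
        for name in docx.namelist():
            if not name.endswith(".xml"):
                continue
            xml = docx.read(name).decode("utf-8", errors="ignore")
            if "DDEAUTO" in xml or " DDE " in xml:
                return True
    return False

if __name__ == "__main__":
    print("DDE field found" if has_dde_field(sys.argv[1]) else "no DDE field found")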

Beaumont then goes on to test this himself, creating a Word document that starts the calculator program, and showing that none of the malware protection running on his machine detects the exploit. This is now 6pm on Tuesday the 10th of October.

By 7.30 that evening, one of the AV vendors has started to identify this type of file as malicious.

By 1am, Beaumont has discovered a Word document that uses the DDE vulnerability to start Internet Explorer and open a website where the malicious code is stored.  What's more interesting is that the site hosting the malware is a US Government website (now shut down).

Then by 8am on the 11th, here’s a copy of the email which has the DDE Vulnerability embedded in it.

Finally at 5pm on Wednesday, there’s a write-up from Talos (part of Cisco) about the whole malware chain

What’s also interesting is that the hackers use DNS to exfiltrate data, which is quite an esoteric way of doing it, but most companies won’t spot it as DNS is a perfectly legitimate service to have running.
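DNS tunnelling usually shows up as lots of long, random-looking subdomains being resolved under a single domain, so even a naive heuristic over your DNS logs will catch the clumsier attempts. A toy sketch only - the thresholds and the example names are made up, and real detection is considerably harder:

import math
from collections import Counter

def entropy(text):
    # Shannon entropy in bits per character - encoded payloads score high.
    counts = Counter(text)
    return -sum(n / len(text) * math.log2(n / len(text)) for n in counts.values())

def looks_like_exfil(qname, max_label_len=40, max_entropy=3.5):
    # Flag query names whose longest label is unusually long or random-looking.
    longest = max(qname.rstrip(".").split("."), key=len)
    return len(longest) > max_label_len or entropy(longest) > max_entropy

queries = [
    "www.bbc.co.uk.",
    "a9x2k7q4m1z8c3v6b5n0p7l2d9w4s1t8h3j6g5r2y7f4.tunnel-example.com.",
]
for q in queries:
    print(q, "-> suspicious" if looks_like_exfil(q) else "-> looks normal")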


If you've managed to follow this to the end, it's clearly a very sophisticated hacking attempt. Here are the core elements of the campaign:
1. Use of the DDE exploit, which is not commonly known and won't be patched by Microsoft
2. Lack of anti-virus products picking up this attack vector
3. Use of legitimate-sounding emails, purporting to be from the SEC and relating to EDGAR (the company filing system in the US)
4. Malware downloaded from a legitimate US Government website

This all goes to show how sophisticated attackers are, and how important it is to stay vigilant.

In a subsequent blog post Beaumont goes on to make the point that Microsoft will have to do something about this, as it is so difficult to protect against.  His suggestion, which makes good sense to me, is to disable DDE by default and allow it to be re-enabled via a registry key.
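To be clear, at the time of writing Microsoft hadn't published such a switch, so the following is purely a hypothetical sketch of what a per-user registry toggle for Word might look like - the key path and the AllowDDE value name are my assumptions for illustration, not documented settings:

import winreg

# Hypothetical example only: the Security key path and the "AllowDDE" value
# name are assumptions about how such a kill switch might be exposed for Word.
KEY_PATH = r"Software\Microsoft\Office\16.0\Word\Security"

with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "AllowDDE", 0, winreg.REG_DWORD, 0)  # 0 = DDE disabled

print("DDE disabled for Word (per-user, hypothetical setting)")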



Wednesday, 21 June 2017

Prodrive, Le Mans 2017 and Social Media done right

10:56 Posted by G No comments
For the first time in a few years, I managed to watch a reasonable amount of the Le Mans coverage this year.  I love a bit of endurance racing, and clearly to us Europeans Le Mans is the biggest race of the year.  I've still never been, something I need to rectify in the very near future!

I missed the actual finish, but via a few friends on social media I was aware of how exciting the finish was in the GTE Pro class (spoiler alert), with the Corvette and the Aston battling it out right to the very end.  I find it astonishing that after 23 hours 55 minutes and 2,800 miles of hard racing the cars were less than a second apart.

I saw this morning that Prodrive's Facebook page had posted the last 5 minutes of the race, plus some post-race celebrations.  It makes for great watching (especially if you're an Aston fan!).



What I really liked, though, is the interaction between the Prodrive team and the punters on Facebook. Clearly it must have been gut-wrenching for the unfortunate Corvette team, but I like the comment from the Prodrive team saying that they went and spoke to the Corvette crew afterwards.  I like that it's a two-way conversation with the Facebook followers, and also that even in this highly competitive, big-budget world the team still have the humility to go and talk to their competitors.

That's the spirit that all racing should aim to emulate; F1 has much to learn, both as a racing spectacle and in terms of fan interaction.

Here's the video :


Monday, 5 June 2017

Why BA should care about IT

20:42 Posted by G 1 comment


Having been in the eye of the storm of the BA IT systems failure last weekend, and only getting away on holiday two days after we should have, I think there are lots of things to learn.


I think what most struck me about the outage was the sheer size of it.  Upon arriving at Heathrow Terminal 5 on Saturday morning with the extended family, all excited about a week's holiday in Greece, we were met with huge queues outside T5, and at that stage it looked like a baggage or check-in problem. But over the course of the next hour it quickly became clear how severe the outage was.  Not only were check-in systems not working, but the departures information boards had been stuck since 9.30am.  Even when we got to the gate, which turned out to be the wrong one, there were planes on stand waiting to push back, more aircraft waiting for a gate, and flight crew equally confused.  When we did finally get on board an aircraft, the pilot informed us that the flight planning systems weren't working, so he couldn't create a flight plan, couldn't work out the correct amount of fuel to put on board, and without that was unwilling to push back off the stand.  Even when we got the news (first via the BBC) that all flights were cancelled, the pilot told us that even the system to cancel flights wasn't working.  This meant that getting buses to take us back to the terminal took a long time, followed by the ignominy of having to go back through passport control having never left the airport, let alone the country.

From an IT perspective there are a few interesting aspects.  Firstly, BA have claimed this to be a power-related incident.  This is an interesting cause.  As far as I'm aware no other companies were impacted by this outage, which strongly suggests that this was not in a shared (co-located) data centre, as otherwise we'd have seen other outages.  It also implies that BA aren't running in the cloud, as we saw no cloud outages over the weekend.

Secondly, assuming this was a dedicated BA data centre, then there's been a major failure of resiliency.  I would normally expect any decent quality data centre to have battery backup to provide power in the immediate aftermath of a power failure.  As soon as a power failure is detected, diesel generators should kick in to provide longer-term power.  Normally batteries sit in-line with the external power to smooth the supply and provide instant protection if the external power fails.  At this level of criticality it would be normal to have two diverse and ideally separate power suppliers.  The diesel generators are some of the most loved engines in the world: they are often encased in permanently warmed enclosures to keep them at the correct operating temperature, and quite often the diesel they consume is pre-warmed as well.  The diesel is also often stored in two different locations to ensure that if one supply gets contaminated there's a secondary one that can still be used. These engines are often over a million pounds each, and some sites I've seen have n+n redundancy (if four generators are needed there are eight on site) to cope with 100% failure.  Clearly as a customer you pay more for this level of redundancy, but as we saw over the weekend you never want an incident like this.

In addition to having all this redundancy built into a data centre, it's vital that all these components are regularly tested.  It's normal for data centres to test battery back-up and run up the generators at least once a month to ensure all the hardware and processes work as they should in an emergency.

Once you're inside the data centre, all the racks (where servers are housed) are typically dual-powered from different backup batteries and power supplies, and each server is then dual-powered to further protect against individual failures.  In total there are six layers of redundancy between power coming into the data centre and the actual server (redundant power suppliers, redundant battery back-up, redundant power generators, dual power to the rack, dual power supplies to the server, and redundant power supplies in the server itself).

As you can see, in theory it's pretty difficult to have a serious power failure.  While it's possible to have a serious failure in parts of a power supply system, it would be highly unusual for this to be service-impacting.

However, as we saw at the weekend, something catastrophic must have happened to produce such a widespread outage, and one that seems to have affected BA globally.

Even outside of pure power redundancy, most large corporations will have redundancy built into individual systems, be that within the same data centre or in a secondary site (ideally both).  For the more sophisticated setups, these are often what's known as active-active, i.e. the service is running in both sites at the same time, so if there's a failure in one server or site the service keeps running with degraded capacity (the application may appear slower to users) but is still available.
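For what it's worth, here's a toy illustration of the active-active idea from the client's side (the site URLs are invented, and real implementations would normally sit behind load balancers rather than in application code):

import itertools
import urllib.request

# Two sites both serving the same service; requests are spread across them,
# and if one stops answering we simply carry on against the other with
# reduced capacity rather than failing outright.
SITES = ["https://dc1.example.com", "https://dc2.example.com"]
_rotation = itertools.cycle(SITES)

def call_service(path, timeout=2.0):
    last_error = None
    for _ in range(len(SITES)):
        base = next(_rotation)
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as response:
                return response.read()
        except OSError as error:   # that site is down or slow - try the other one
            last_error = error
    raise RuntimeError("both sites unavailable") from last_error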

Most companies will spend at least seven-figure sums annually running this level of redundancy and will test it regularly (most regulators insist on at least every two years).  Given the scale of this outage and the number of systems that failed, it would appear that either there wasn't the appropriate level of redundancy or it hadn't been tested regularly enough.

It's worth pointing out that everything mentioned above is expensive, painful to test, and does little to add to the bottom line of the company.  But it is exactly this sort of 'insurance' that you hope never to have to rely on, and having thorough, well-tested plans makes all the difference when this sort of event happens.

There have been lots of reports in the UK press, and comments from unions, saying this event is a consequence of BA outsourcing its IT services to a third party.  I'm not sure whether outsourcing had any impact on the outage, but if BA do outsource their IT, that in itself is an indication that they do not perceive IT to be a core function, as they've asked someone else to do it on their behalf.

You may have read the many IT articles about Uber being the biggest taxi company yet owning no taxis, and Airbnb being the biggest hotel chain yet owning no hotels.  It's clear that both are technology companies rather than traditional taxi or hotel vendors, and with such a reliance on technology they would be expected to have highly resilient systems that are regularly tested.

BA, however, doesn't fit that model: their biggest expense wouldn't be IT, as they probably spend significantly more on aircraft, fuel, staff and so on.  But when I thought about it, their main systemic risk probably is IT.  If any one model of aircraft were grounded for any reason, it would be impactful but not catastrophic, because they use a range of planes in their fleet.  Similarly, if one of the unions that some of their staff belong to goes on strike (as we've seen in the past), it's annoying but not critical.  The same could probably be said for their food or fuel vendors, which probably vary around the world, so if any one of them fails, BA can most likely work around an individual failure.

Not so with IT: it appears that one power failure in one data centre had the ability to completely cripple one of the biggest airlines in the world.  I cannot believe that BA would have knowingly accepted this risk and chosen to run with it.

In the ever more digital world we live in, every company is slowly turning into a technology company.  Maybe not in a customer-facing way, but even in a traditional industry such as aviation, where aircraft hardware will always be key, this weekend proved that you can have all the planes in the world, but if the tech isn't there to support them, you've got no business.