963 – belatedly learning

So I didn’t post last night. I just got in too late, after too many glasses of wine, a bit too tired, and – well, sod it, it’s only a blog.

I was feeling full of thinks though.

There’s a wonderful mini-conference series called ProductTank, and I try to go whenever I can. Sometimes it’s to see friends, sometimes it’s to refresh what I know, sometimes it’s to be reminded how far I’ve come in my adopted career. My latest adopted career.

Last night there was a session on AI and machine learning. Which I genuinely know nothing about. Well, nothing recent. I watched some Horizons on the subject in the 80s and 90s, and I know what Wired can tell you about Google’s latest statistical magic – the latter of which will be about 50% dubious anyway. So I thought it was about time I learned.

I’ve recently become a subscriber to New Scientist, and it’s made me realise how much I’d started to slip into the mindset that the world was largely fixed and most of what could be solved had been solved. There is a continual trickle of crazy amazing near-magic happening all the time, redefining the world for our children, and we don’t hear enough about it because the media don’t really do science and lots of this work will eventually just be one cog in someone else’s product.

So yes, machine learning and AI. That should be good.

The talks were amazing. Azeem Azhar (ironically, even his name turns into a veritable battle with autocorrect) framed the work going on around the world, and talked of some of the ethical concerns in this area – particularly reminding us that a) all of our assumptions and features are inherently political, and b) we should beware of deriving our machine learning “normal” from early users, who will be ‘a sea of dudes’. I’ve followed him on Twitter for ages, and we’ve been one degree of separation from each other for probably sixteen years, but it’s the first time I’ve ever seen him speaking in the flesh.

Shaona Ghosh was amazing on the techniques behind machine learning and I think everyone came away realising that there are no quick and easy solutions here. Not one. It’s a long time since there’s been a ProductTank talk where the slides contained sigma signs – and I bloody loved it. Little long-unused corners of my brain started fizzing slightly – I think it’s an area to carry on looking at, although I also know enough to understand I’ll be forever an amateur.

Finally, Chris Auer-Welsbach from IBM Watson talked about interface, and on the idea that AI should be part of serving the user (and I think there was some stuff about GPUs in there too).

But these three talks together gave me a slight sense of unease. This is a really hard area, with ethical troubles even if you are acting in good faith. But, and this is my big worry, I don’t think most startups truly want to serve their users. I think the temptation to abuse and exploit users is far too strong. I was reminded of the early days of interactive drama, when one of my pleas when I gave talks was “for all of our sakes, if any of you do get that massive commission, please don’t take the easy route. Don’t blow it. This medium only has so many chances, and every failed project is another of the nine lives lost. Try to remember the greater good!”

And this is my worry about AI and machine learning – that it only takes a tiny number of bad actors to completely discredit the whole movement, for government to introduce legislation “for the good of consumers” that cuts off whole avenues of potential benefit forever.

And, while I’m often an optimist and believe the best of people, on this occasion I’ve got a feeling of dull inevitability. Someone somewhere Just Won’t Be Able To Help Themselves. Or there will be boardroom pressure and a few shortcuts will be taken.

But please don’t let it be you.
