The rise of ‘social bots’ reached a tipping point in 2016, with four of the largest technology behemoths, Microsoft, Amazon, Google and Facebook, putting their commercial and technological weight behind bot technology and announcing commercial products. This was precipitated by the explosion of social media, which has given these technologies incredibly rich ground within which to learn, cause mischief and interact with their human users. The journey to this point has been long and tumultuous, with these new technologies raising legal questions throughout their development. In parallel, the advent of hyper-scale, cloud-based systems and data analysis, combined with advanced artificial intelligence techniques, means we must consider the potential for this convergent technology to shake the foundations of our privacy and even our legal frameworks.
This paper will examine the history of these social bots, from their humble start as web crawlers and simple, action-orientated algorithms, to the more complicated self-learning bots that roam social networks such as Twitter and Facebook. Furthermore, we will look at the complications that arise from teaching bots from social data and explore whether the algorithmic filtering of content used to increase consumer engagement skews or biases a bot’s ability to interact meaningfully across a range of audiences. Finally, we will look at the legal ramifications of social bots as their technology matures by examining the effects of filter bubbles, data collection, social trust and whether bots can have intent. Continue reading “Chatbots, social media and the law”
It is estimated that over 90% of vehicular accidents are caused by human error and inattention (Eugensson, Brännström, Frasher, Rothoff, Solyom & Robertsson, 2013; Goodall, 2014b) and with the gathering momentum of autonomous vehicle (AV) technology, we are close to the cusp of eliminating a large number of fatalities associated with personal transport. While advances in machine vision and learning are propelling the industry forward, the field of machine ethics still lags (Powers, 2011) but with each technological advance, we are getting closer to an inevitability: our vehicles will soon be making ethical decisions on our behalf.
This paper will discuss whether autonomous vehicles should always swerve around children, even if that means hitting other people. To understand the complexities behind a seemingly simple question, we must look more holistically at the state of decision-making technologies and borrow ‘value of life’ quantisation metrics from the healthcare and insurance fields, but first it is important to look more generally at the wider questions of how humans make ethical and moral decisions using abstract thought experiments and modelling. Continue reading “Why Robot Cars Should Kill our Children”
I’m currently doing a Masters in Law at the University of Edinburgh — it’s a distance-learning, 100% online course over 20 months. During the summer break, I also started a new job which meant I was given a brand new MacBook Pro to work on. As I got my head around OS X for the first time, I enjoyed using it and found my Surface Pro 2 gathering dust. As such, I sold it and found myself entering my second semester in September with just a MacBook Pro, an iPad Mini, a Windows 10 desktop and a brand-new set of highlighters.
What a mistake. Continue reading “Surface Pro & OneNote: Why I lasted less than a semester without them”
The International Association of Athletics Federations (IAAF) has been in the news again recently over allegations of widespread doping and subsequent cover-ups. Reading all this coverage reminded me of an essay I wrote for my law Masters discussing the premise that doping (and, further in the future, genetic enhancement) is a natural progression of sport and therefore a positive thing. I’ve re-written and expanded on portions here to make it a little more readable. Let the debate commence!
Bleeding edge science often finds itself chasing a moving target. For example, in 1997, chess was seen as the epitome of human intelligence, yet as soon as Deep Blue beat Garry Kasparov, many were quick to extol the differences between intuition and algorithms, contrasting the beauty of the mind’s expertise with the brute force of assessing 200 million chess moves a second. Just as Deep Blue generated discussions on the definitions of “true” intelligence, humanity and many peripheral philosophical questions, the massive advances in pharmaceutical sciences and, more recently, genetic therapies have opened a similar frontier in athletics — with one key difference.
Artificial intelligence is just that: artificial. Doping and genetic therapies are about “improving” humans and our own abilities to perform and push those boundaries. So while, as a society, we ordinarily embrace and celebrate our technological advances, in recent years they have started to impinge on what it means to actually be human. However, ever since humans have competed in sporting and athletic events, they have tried all means available to them to achieve their best possible results.
As such, when Julian Savulescu, Professor of Practical Ethics at the University of Oxford, argued that “[g]enetic enhancement is not against the spirit of sport; it is the spirit of sport” it was guaranteed to divide opinions. So, was he right? Continue reading “Chasing a Moving Target: Has Anti-Doping Failed?”
Over the last decade, the price of genome sequencing has plummeted, leading to the rise of direct-to-consumer personal genomic testing (DTC PGT). The greater accessibility to this genetic information has led to increasing concerns about the validity of this data, the economic burden it could place on the medical and insurance industries and the psychological impact it could have on those unfamiliar with this new information.
Additionally, as PGT becomes more prevalent, so too will the vast quantity of valuable data that these services generate. This has created fears around privacy and the use of personal genomic data in academic research and insurance premium calculations, as well as its effect on existing legislation. All these concerns have led to a rise in calls to increase the regulation that governs the industry.
While there are many valid concerns with DTC PGT, particularly with respect to the quality and efficacy of the information these commoditised services offer, there are also real fears that over-regulation could stifle innovation and deny consumers access to their own personal genetic data. This paper aims to show that despite the trepidation surrounding DTC PGT, tighter regulation that limits individuals’ access to the information is not the answer. Instead, legislation should focus on the quality, reporting and oversight of the information presented to the consumer, as well as addressing privacy concerns. Continue reading “This is How I’m Going to Die: Consumer Genetics and Our Access to Knowledge”
A couple of months ago, the Telegraph published a superb piece of shoddy journalism by claiming there were only 100 cod left in the North Sea. This came from the Sunday Times’ equally misguided claim that there were 100 adult cod left. This in turn was picked up by other mainstream media outlets, no doubt triggering a run on local fish and chip shops across the UK before our favourite fish was declared extinct. DEFRA (Department for Environment, Food and Rural Affairs) was quick to publish a release saying the Sunday Times was off by a staggering 21 million fish.
How on earth do respectable media companies like this get basic maths and science so wrong? Worse, why does this seem to be a systemic issue throughout all of media? Why do journalists struggle with the most basic understanding of numbers?
This has always frustrated me but after recently reading Ben Goldacre’s Bad Science, it became very apparent that this incompetence with mathematics had effects that stretched far beyond frustrating the reader. Much of the damage lies in the misunderstanding or underappreciation of the underlying statistics.
Yet a brief look at the history of statistics reveals it wasn’t always like this. Continue reading “Mainstream Media and Maths: We’re All Going to DIE!”
A few months ago, I was talking with a friend about Universal Darwinism and the power of genetics, and without hesitation she asked, “Well, what about homosexuality? Surely that would have been removed from the gene pool centuries ago?” As the evolutionary pragmatist, I completely agreed with her. If being gay was indeed nature over nurture, it would have been removed from the gene pool millennia ago, but the liberal in me felt that homosexuality must be genetic (as opposed to a life choice, etc.). This conflict (along with the obvious observation that homosexuality is still pervasive in society) had me utterly stumped.
Let’s tackle the problem of evolving homosexuality out of the gene pool first — why did I believe that homosexuality should have been bred out? Imagine two groups: Group A, which makes up 99.9% of the population, and Group B, which makes up the remaining 0.1%. Now imagine that Group B is a mere 1% more efficient at reproducing than Group A. The chart below shows how the two populations would progress (as a percentage of the total population) if Group A had 2 children each and Group B had 2.02 (1% more).
Unbelievably, despite making up only 0.1% of the initial population, Group B takes as little as 700 generations to equal Group A, and under 1,400 for their positions to be reversed! Anyone with a financial or economic background will appreciate the power of compound interest here; the same phenomenon is exacerbated when applied to evolution, given both the size of the populations and the number of years over which we look at the effects.
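The compounding can be checked directly. A minimal sketch, using only the figures from the scenario above (0.1% vs 99.9% starting shares, 2.02 vs 2.00 children each):

```python
import math

# The 1% advantage compounds: each generation, the ratio of Group B to
# Group A grows by a factor of 2.02 / 2.00 = 1.01.
initial_ratio = 0.001 / 0.999      # B/A at generation 0
growth = 2.02 / 2.00               # per-generation multiplier on the ratio

# Generations until the groups are equal (ratio reaches 1), then until
# their positions are fully reversed (ratio reaches the mirror image, 999).
to_equal = math.log(1 / initial_ratio) / math.log(growth)
to_reverse = math.log((1 / initial_ratio) ** 2) / math.log(growth)
print(round(to_equal))    # about 694 generations, i.e. under 700
print(round(to_reverse))  # about 1,388, i.e. under 1,400
```

Solving 1.01ⁿ = 999 gives the crossover directly, matching the chart’s 700 and 1,400 generation figures.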
So what does this have to do with homosexuality? Simple — homosexual partners cannot reproduce (naturally) and therefore cannot pass their “gay gene” down to their children. So, as in the example above, if a 1% decrease in reproductive capability leads to practical extinction within 1,400 generations, how has homosexuality (with a 100% decrease in reproductive capability!) not been erased after 50,000 years? Continue reading “Homosexuality: Nature’s True Altruists?”
I’ve barely touched Twitter in the last 3 months — it isn’t that I’m not tweeting, I’m not even checking it. This seemed to happen shortly after I read The Shallows by Nicholas Carr, another great book about humanity’s currently adolescent approach to the Internet. While the overall tone of the book was a little too “doomsday” for me, he had some fantastic ideas on the importance that immediacy has gained in the last few years.
24-hour news, Twitter feeds, Facebook statuses, news aggregators and near real-time search engines are pumping information at us at a pace we’ve never experienced before. As consumers, we expect and demand this sort of information: we check Facebook and expect the latest news from our friends, or check the BBC News website for minute-by-minute updates. However, over time this has had the undesired effect that this immediacy (or at least the desire for it) is now pervasive.
How many people have noticed their attention span has significantly decreased over the last few years? How many people feel like they’re undergoing withdrawal symptoms when they leave their phone at home? How many of you actively crave your information fix?
I do. Continue reading “The Unimportance of Immediacy”
A few weeks ago I updated my Kohonen neural network code to support circular rows and columns, as well as some simple additional visualizations, which allowed for some interesting experimentation.
Circular rows and columns follow a simple premise — the neighbourhood influence effect of training can now wrap around both rows and columns, allowing for circular, cylindrical and toroidal geometries.
Think of it like this: a network with a single row wrapped around can learn a circular topology. A nice, simple example of this is the Travelling Salesman Problem, as you need one continuous, circular route to be determined. Below is an example of a Kohonen network run over 350 iterations. The initial phase spreads the nodes out over the map, with the second phase making more localized adjustments:
- Kohonen-based TSP Solver
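The wrap-around neighbourhood described above can be sketched as a minimal 1-D ring Kohonen network applied to a toy TSP instance. This is not the original solver: the city count, ring size and learning parameters are all illustrative assumptions.

```python
import numpy as np

# A ring of nodes is trained on random city positions; the circular
# neighbourhood means influence wraps around the ring in both directions.
rng = np.random.default_rng(42)
cities = rng.random((10, 2))          # 10 random cities in the unit square
n_nodes = 30                          # nodes on the ring (wraps around)
nodes = rng.random((n_nodes, 2))

iterations = 350
for t in range(iterations):
    frac = t / iterations
    radius = max(1.0, (n_nodes / 4) * (1 - frac))  # shrinking neighbourhood
    lr = 0.8 * (1 - frac) + 0.01                   # decaying learning rate
    city = cities[rng.integers(len(cities))]       # pick a random city
    winner = np.argmin(np.linalg.norm(nodes - city, axis=1))
    # Circular distance along the ring: the shorter way around counts.
    idx = np.arange(n_nodes)
    d = np.minimum(np.abs(idx - winner), n_nodes - np.abs(idx - winner))
    influence = np.exp(-(d ** 2) / (2 * radius ** 2))
    nodes += lr * influence[:, None] * (city - nodes)

# Read the tour off the ring: map each city to its nearest node, then
# sort cities by their node's position around the ring.
order = np.argsort([np.argmin(np.linalg.norm(nodes - c, axis=1))
                    for c in cities])
print(order)  # a circular visiting order over the 10 cities
```

The early iterations (large radius) spread the ring over the map, while the later ones (radius near 1) make the localized adjustments mentioned above — the same two phases visible in the animation.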
It is worth noting that the Kohonen-based solver isn’t perfect, sometimes choosing interim junctions where there is no city, but it is an interesting, reasonably practical use of a different kind of Kohonen topology. Continue reading “Kohonen Neural Network Experimentations”
There has been a storm on Twitter and (due to Robert Scoble’s involvement) Google+ recently over bloggers, journalists and the relationship between the two. It seems that Dan Lyons, a technology journalist for Newsweek, was upset — “Hit men, click whores and paid apologists: Welcome to the Silicon Valley cesspool” — and took to his personal blog to vent. Aiming at Michael Arrington and MG Siegler will inevitably draw fire, but a week or so later, Lyons further drew the ire of the social media elite by accusing Robert Scoble of employing similar tactics. Scoble very publicly rebutted Lyons’ arguments, but the argument continues…
For me, a lot of this seems to boil down to one thing: the consumerization of technology journalism. Continue reading “The Consumerization of IT (Journalism)”