When I first joined Columbia Engineering, in what became my longest job, it was like a quaint small town remarkably untouched by (post)modernity. The rot ravaging the rest of the university felt far away.
Bashful nerds toiled at obscure technical problems devoid of politics, and girls’ gatherings still gave advice on meeting Mr. Right, bearing children, and achieving that elusive work/life balance. It was amazingly retro, and a charming contrast to the anxious stridency I’d found elsewhere on campus.
As I gradually learned while putting together the school’s 150th anniversary celebration, it had always been a bit of a backwater—the snooty bluebloods heading Columbia at large traditionally dismissed engineering as a marginal appendage to the more socially prestigious College and Schools of Law and Medicine. Even though the 1968 protests/riots that shattered the university were in part against engineering faculty’s military-industrial contracts, including for a nuclear reactor in Manhattan, our building had gone unscathed; few demonstrators knew or cared where it was.
Following the sesquicentennial campaign I was retained as a PR jack-of-all-trades bridging engineering and applied science with other fields. Columbia Engineering was on the rise, but couldn’t quite compete on its own against the facilities and cred of places like MIT and Caltech. My role was mainly highlighting splashy collaborations and the romance of life in the big city.
As the 2010s progressed, with tech ever more Columbia’s cash cow and center of gravity, my tasks evolved from jazzing up boring research to showcasing how engineering was enabling larger ideological ambitions—especially regarding climate change and the revolutionary potential of Big Data. Suddenly money and attention rained down, so long as matters pertained to sociopolitical optimization, and what was best about the school began to shrivel and die.
For years I covered our Data Science Institute’s annual Data Science Day, a parade of professors’ TED-ish talks, each spring more topical than the last. With enthusiastic literal-mindedness, geeky engineers pandered hard to the zeitgeist, particularly how best to maximize socially conscious algorithms. What began as outlines for broad oversight became blueprints for elite supervision, the emerging architecture of a proper new order.
The big issue with exploiting massive data sets for social engineering was that they reflected too much inconvenient reality, ill-suited for useful narratives so long as the firehose of unfiltered stats prevailed. The solution, many specialists believed, was algorithmic innovation—honing new means of spiking problematic data or at least boosting the better stuff to feed more progressive machine learning.
With ever-sharper artificial intelligence thus trained for equity-minded global technocracy, disregarding bad data, multidisciplinary computer scientists could turn to an even more ambitious vision: systematically solving inequality via quantified intersectionality, putting thumbs on every scale at once.
In accounting for race, gender, income, et al., programmers were gaining the power to construct new hierarchies at whim. It was becoming startlingly easy to install novel schemes of advantage and disadvantage, rewards and sanctions, like assigning magic points in a game of Final Fantasy.
Nobody ever used the term “social credit system,” but the implications were clear—whoever controlled the algorithms controlled the future, or at least structured the incentives and life chances of billions. In the Western world the landscape was mainly a patchwork of possibilities from labs at companies like Alphabet and Meta, but it was rapidly coalescing into more comprehensive software for societal management.
Exactly which enlightened referees would administer the system went undefined, presumably experts from the biggest firms, finest schools and highest offices. For the time being it was all about giving them more tools for scientific governance.
Technocratic dreams among a cadre of Columbia engineers dated back to the 1920s, inspired by Russian and Italian central planning, but never before had they enjoyed so many allies or such sway to override others’ objections. Most STEM folk are still essentially nonideological nerds chipping away at challenges too complicated to politicize, but ever more of their fields and funding are being hijacked for dubious utopian agendas.
With institutional credibility crumbling fast it remains to be seen how much can be implemented and entrenched, but the hour grows late. Convenient assumptions and ideologies are already deeply embedded in the array of algorithms increasingly marshaling society, exempt from public scrutiny and the scientific method.
At this point there’s no stuffing the genie back in the bottle—with the digitalization of every aspect of life come tantalizing totalitarian possibilities too hard for idealists to resist. The era of mere voters affecting much of anything appears to be drawing to a close, with human beings but raw material for molding to elite specifications.
Next: Stand, Columbia!
Is this really the future? Sounds very bleak. I don’t feel much hope. What I really want to know is why? Why have so many become co-opted? Whose vision are we implementing? Is there the possibility of turning back?
If one were a faculty member in, say, a sleepy backwater Ivy League engineering school, how could one work with folks in communications to communicate the appropriate level of scientific discovery, balanced with the need for hype, institutional promotion, $$$, and some administrator's social program? Asking for a friend... Thanks!