Something interesting every day from one of the oldest and best independent sites on the Web.
A feed by Jason Kottke
Permalink - Posted on 2018-04-22 13:53
For the past few years, because of my interest in The Great Span of human history, I’ve been tracking the last remaining people who were alive in the 1800s and the 19th century. As of 2015, only two women born in the 1800s and two others born in 1900 (the last year of the 19th century) were still alive. In the next two years, three of those women passed away, including Jamaican Violet Brown, the last living subject of Queen Victoria, who reigned over the British Empire starting in 1837.
Yesterday Nabi Tajima, the last known survivor of the 19th century, died in Japan at age 117.
Tajima was born in a village on Kikaijima on August 4, 1900. She had 9 children and more than 160 descendants, including great-great-great-grandchildren, according to the Gerontology Research Group (GRG), which verified her date of birth.
At the time of her death, Tajima was 117 years and 280 days old, making her the third oldest person in recorded human history. She said that her secret to longevity was eating delicious things and sleeping well, but she also enjoyed hand-dancing to the sound of the shamisen.
Tajima was born at a time when Emperor Meiji ruled Japan as the nation rose from an isolationist feudal state to become a world power. William McKinley served as president of the United States and Victoria was the Queen of the United Kingdom. The world’s population was just 1.6 billion.
Tajima was already 45 years old when World War II ended…amazing. According to the Gerontology Research Group’s World Supercentenarian Rankings List, the oldest living person is Chiyo Miyako of Japan, who will hopefully turn 117 in a week and a half.
Permalink - Posted on 2018-04-20 19:20
Neubronner developed the pigeon camera for practical purposes. At first, he was simply hoping to track the flights of the birds in his flock. But his invention also represented a more sublime achievement. The images his pigeons captured, featured in “The Pigeon Photographer,” a recent book from Rorhof, are among the very early photos taken of Earth from above (the earliest were captured from balloons and kites) and are distinct for having the GoPro-like quality of channelling animal movement. That perspective that is so commonplace to us now, in which the rooftops stretch out before us as though they were made of a child’s blocks, and people crawl along like ants, was a rare sight when Neubronner took his pigeon pictures. The photos offered a glimpse of the world rendered pocket-size, as it eventually would be via a hundred types of new technology—by airplanes, or skyscrapers, or Google Earth.
But there’s also something a bit wild about the photos, precisely because they were taken by birds. Their framing is random and their angles are askew; sometimes a wing feather obscures the view. Pigeons are surely the most pedestrian of birds, but, looking at these oddly graceful photographs, or at Neubronner’s pictures of the birds looking stately and upright in their photo kits, they start to seem like heavenly creatures.
These pictures remind me quite a bit of the chapters in Paul Saint-Amour’s Tense Future on the relationship between aerial photography and modernist art. (I can’t recall if he mentions the pigeons or not.)
Permalink - Posted on 2018-04-20 18:40
Do you ever read something that feels like it was written just for you? That’s how I feel whenever Craig Mod writes about digital reading. His latest essay, “Reconsidering the Hardware Kindle Interface,” doesn’t have a title that pops unless you 1) love reading and 2) know that Craig is really good at making design talk exciting and accessible.
The big, simple, so-obvious-it-seems-trite-to-point-it-out statement here is that hardware buttons on e-readers are good and important. When your primary mode of interaction is to do one or two things over and over again, hardware buttons are really smart and valuable. I’ll let Craig explain why:
Hardware buttons inextricably tie you to a specific interaction model. So for the iPhone to be a flexible container into which anything can be poured it makes most sense to have (almost) no hardware controls.
But the hardware Kindle? Oh, what a wonderful gift for Amazon designers. The Kindle is predictable! We know what we’re getting on almost every page. And the actions of the user are so strictly defined — turn page, highlight, go back to library — that you can build in hardware buttons to do a lot of heavy lifting. And yet! Amazon seems to ignore (to lesser and greater degrees depending on the device) how predictable a hardware Kindle is.
Specifically, dedicated hardware buttons mean that you can remove the amount of unpredictability that happens when you touch the screen. Touching the screen now means “I’m going to interact with the content.”
What benefit comes of making the content of the book a first class object? It removes the brittleness of the current interaction model. Currently — when you tap — you might invoke a menu, a page turn, a bookmark, or a highlight. Meta actions are on a layer above content interactions. A Kindle is just a content container. And so this feels upside down.
Touchscreens work best when they allow direct and explicit engagement with the objects on the screen.
If the content of the book was the only screen object, a tap on a word would instantly bring up the dictionary. A drag would highlight. A single tap on an image would zoom in. Suddenly the text is alive and present. Your interaction with it? Thoughtless. Confident. No false taps. No accidental page turns. No accidental bookmarks. This further simplifies the logic of the touch engine watching for taps in the background, making these interactions faster and the programmatic logic simpler.
Doesn’t it just sound like a goddamn delight?
Permalink - Posted on 2018-04-20 16:30
io9 has a solid interview with Dan Gearino, author of a new book called Comic Shop: The Retail Mavericks Who Gave Us A New Geek Culture. It’s about the history of comic book stores, the economics of the industry, how they’ve survived a range of boom-and-bust cycles, and wave after wave of cultural and technological transformation. Here’s an excerpt from the book:
Publishers sell most of their material to comic shops on a nonreturnable basis. By contrast, bookstores and other media retailers—some of which sell the same products as comic stores do—can return unsold goods. The result is that comic shops bear a disproportionately high level of risk when a would-be hit series turns out to be a dud. And there are plenty of duds.
The staff at Laughing Ogre, and at shops across the country, let me into their worlds for what turned out to be a tumultuous year, from the summer of 2015 to the summer of 2016. The two major comics publishers, Marvel and DC, did most of the damage, with many new series that did not catch on, relaunches of existing series that often failed to energize sales, and a monthslong delay for one of the top-selling titles, Marvel’s Secret Wars. The notable failures were almost all tied to periodical comics, single issues that are sold mainly to people who shop as a weekly habit. In other words, the leading publishers spent the year pissing off some of their most loyal customers and undermining their retailers. And yet, much of the sales slide was offset by growth of independent publishers and by small hits such as Princeless, big hits such as the sci-fi epic Saga, and many in between.
Permalink - Posted on 2018-04-20 15:17
In a practice that started in 1865 and still continues today, lectores (storytellers) in Cuban cigar factories read to the workers while they roll cigars. They read the news, novels, horoscopes, recipes…it’s like a live daily radio show or podcast for the workers.
I’m not just a reader; I’m rather a cultural promoter of sorts. I usually try to bring topics that can influence their day-to-day, and help them face certain issues.
(Gee, that sounds like what I do here!) The practice started as a way to educate and entertain workers and eventually helped fuel the Cuban independence movement…a little knowledge goes a long way. Nowadays, the practice is less revolutionary. From a piece in The Economist about lectores:
The workers themselves choose the lectores. “This is the only job in Cuba that is democratically decided,” says an employee. The audience is demanding. Torcedores signal approval by tapping chavetas, oyster-shaped knives, on their worktables; slamming them on the floor shows displeasure. They vote on reading material: Ms Valdés-Lombillo recently finished “A Time to Die” by Wilbur Smith, a South African novelist, and “Semana Santa en San Francisco”, by Agustin García Marrero, a Cuban. When the readings get steamy, torcedores provide an accompaniment of suggestive sound effects. They laugh when a horoscope suggests that someone might inherit a fortune.
This piece in Mental Floss also contains some interesting tidbits:
One lectora, Maria Caridad Gonzalez Martinez, wrote 21 novels over her career. None were published; she simply read them all aloud to her audience.
Permalink - Posted on 2018-04-19 21:37
Fittingly using only off-the-shelf components, a team of researchers in Singapore built a robot capable of assembling a Stefan chair from Ikea (minus actually bolting it together). The assembly time was around 20 minutes, about 5-10 minutes slower than a typical human would take.
It took a few attempts to get it right. Early on, the robots dropped wooden pins, let go of parts too soon, and performed moves that did more to dismantle the chair than assemble it. Some moves required a part to be held by both robots at the same time, and since industrial robots are far stronger than Ikea furniture, a number of mistakes ended badly. “We bought four chair kits and broke a few of them,” said Pham.
Once the robot can fully assemble Ikea furniture in near-human timeframes, I propose we stop all robotics and AI research. When humanity no longer has to struggle with Ikea assembly, we can live like Scandinavian kings and not have to worry about AI murderbots killing us all (before they get bored, of course).
Permalink - Posted on 2018-04-19 20:34
Watson is IBM’s AI platform. This afternoon I tried out IBM Watson’s Personality Insights Demo. The service “derives insights about personality characteristics from social media, enterprise data, or other digital communications”. Watson looked at my Twitter account and painted a personality portrait of me:
You are shrewd, inner-directed and can be perceived as indirect.
You are authority-challenging: you prefer to challenge authority and traditional values to help bring about positive changes. You are solemn: you are generally serious and do not joke much. And you are philosophical: you are open to and intrigued by new ideas and love to explore them.
Experiences that give a sense of discovery hold some appeal to you.
You are relatively unconcerned with both tradition and taking pleasure in life. You care more about making your own path than following what others have done. And you prefer activities with a purpose greater than just personal enjoyment.
- Watson doesn’t use Oxford commas?
- Shrewd? I’m not sure I’ve ever been described using that word before. Inner-directed though…that’s pretty much right.
- Perceived as indirect? No idea where this comes from. Maybe I’ve learned to be more diplomatic & guarded in what I say and how I say it, but mostly I struggle with being too direct.
- “You are generally serious and do not joke much”… I think I’m both generally serious and joke a lot.
- “You prefer activities with a purpose greater than just personal enjoyment”… I don’t understand what this means. Does this mean volunteering? Or that I prefer more intellectual activities than mindless entertainment? (And that last statement isn’t even true.)
Watson also guessed that I “like musical movies” (in general, no), “have experience playing music” (definite no), and am unlikely to “prefer style when buying clothes” (siiiick burn but not exactly wrong). You can try it yourself here. (via @buzz)
Update: Ariel Isaac fed Watson the text for Trump’s 2018 State of the Union address and well, it didn’t do so well:
Trump is empathetic, self-controlled, and makes decisions with little regard for how he shows off his talents? My dear Watson, are you feeling ok? But I’m pretty sure he doesn’t like rap music…
Permalink - Posted on 2018-04-19 19:00
A report called What We Get Wrong About Closing the Racial Wealth Gap was released this month by a group of economists and researchers from Samuel DuBois Cook Center on Social Equity at Duke University and the Insight Center for Community Economic Development. They report that the racial wealth gap in the United States is “large and shows no signs of closing”; this holds true at all levels in the wealth spectrum:
The white household living near the poverty line typically has about $18,000 in wealth, while black households in similar economic straits typically have a median wealth near zero. This means, in turn, that many black families have a negative net worth.
The 99th percentile black family is worth a mere $1,574,000 while the 99th percentile white family is worth over 12 million dollars. This means over 870,000 white families have a net worth above 12 million dollars, while, out of the 20 million black families in America, fewer than 380,000 are even worth a single million dollars. By comparison, over 13 million of the total 85 million white families are millionaires or better.
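The percentile figures in the excerpt above can be sanity-checked with a quick back-of-the-envelope sketch. The family totals (85 million white families, 20 million black families) come from the quoted text; the only math here is taking the top 1% of each group, so the numbers won't match the report exactly:

```python
# Back-of-the-envelope check on the report's 99th-percentile figures.
# Family totals are taken from the excerpt; the report's own counts
# differ slightly, so this is only a rough consistency check.

white_families = 85_000_000
black_families = 20_000_000

# Families at or above the 99th percentile = the top 1% of each group.
white_top_1pct = white_families // 100  # 850,000
black_top_1pct = black_families // 100  # 200,000

# 850,000 is in the same ballpark as the report's "over 870,000"
# white families above the 99th-percentile black family's net worth.
print(white_top_1pct, black_top_1pct)
```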
The authors then address ten common myths about the racial wealth gap, many of which are just straight-up racist — if only blacks just worked harder, saved more, learned more about financial literacy, etc. — particularly the one about black family disorganization:
The increasing rate of single parent households is often invoked to explain growing inequality, and the prevalence of black single motherhood is often seen as a driver of racial wealth inequities. These explanations tend to confuse consequence and cause and are largely driven by claims that if blacks change their behavior, they would see marked increases in wealth accumulation. This is a dangerous narrative that is steeped in racist stereotypes.
Single motherhood is a reflection of inequality, not a cause. White women still have considerably more wealth than black women, regardless of whether they are raising children. In fact, single white women with kids have the same amount of wealth as single black women without kids. Recent research also reveals that the median single-parent white family has more than twice the wealth of the median black or Latino family with two parents. These data show that economic benefits that are typically associated with marriage will not close the racial wealth gap (Traub et al. 2017). Having the “ideal” family type does not enable black households to substantially reduce the racial gulf in wealth.
And overall, the authors conclude that the wealth gap is structural in nature, cannot be solved through the individual actions of blacks, and can only be solved through “a major redistributive effort or another major public policy intervention to build black American wealth”.
These myths support a point of view that identifies dysfunctional black behaviors as the basic cause of persistent racial inequality, including the black-white wealth disparity, in the United States. We systematically demonstrate here that a narrative that places the onus of the racial wealth gap on black defectiveness is false in all of its permutations.
We challenge the conventional set of claims that are made about the racial wealth gap in the United States. We contend that the cause of the gap must be found in the structural characteristics of the American economy, heavily infused at every point with both an inheritance of racism and the ongoing authority of white supremacy.
Gosh, it’s almost like if one group of people owned another group of people for hundreds of years — like the wealth of the group was literally the bodies, minds, and souls of the members of the other group — and then systematically and economically discriminated against them for another 100+ years, it’s nearly impossible for them to catch up. (via @eveewing)
Permalink - Posted on 2018-04-19 17:27
Steven Johnson, the author of the recent Wonderland and a whole gaggle of other books in the kottke.org wheelhouse,1 is coming out with a new book in September called Farsighted: How We Make the Decisions That Matter the Most.
Plenty of books offer useful advice on how to get better at making quick-thinking, intuitive choices. But what about more consequential decisions, the ones that affect our lives for years, or centuries, to come? Our most powerful stories revolve around these kinds of decisions: where to live, whom to marry, what to believe, whether to start a company, how to end a war.
Full of the beautifully crafted storytelling and novel insights that Steven Johnson’s fans know to expect, Farsighted draws lessons from cognitive science, social psychology, military strategy, environmental planning, and great works of literature. Everyone thinks we are living in an age of short attention spans, but we’ve actually learned a lot about making long-term decisions over the past few decades. Johnson makes a compelling case for a smarter and more deliberative decision-making approach. He argues that we choose better when we break out of the myopia of single-scale thinking and develop methods for considering all the factors involved.
In a post on his website, Johnson explains where the idea for the book came from and some specific stories that can be found in its pages.
Some of the threads bring back characters from my earlier works: The Invention Of Air’s Joseph Priestley and Ben Franklin make an important cameo in the opening pages, and the book examines two key turning points in the life of Charles Darwin, building on the Darwin stories woven through Good Ideas. But there are also stories drawn from critical decisions in urban planning — New York’s decision to bury Collect Pond in the early 1800s, and to build the High Line in the early 2000s — alongside stories of hard choices drawn from military history, most notably the decision process that led to the raid on Osama Bin Laden’s compound in 2011. There are insights drawn from cognitive science, behavioral psychology, and sociology. But it is also in many ways a book about the importance of storytelling. There’s as much Middlemarch in the book as there is modern neuroscience.
Every so often, I am asked why I don’t write a book, “you know, like kottke.org but in book form”. There are many answers to that, but one of the biggest is that Steven Johnson writes the books that I would write in the way I would want to write them, except he does it way better than I would. I’m aware this is perhaps a dumb reason, but it’s infinitely easier and more enjoyable for me to just read his books than to bother working on my own.↩
Permalink - Posted on 2018-04-19 14:45
Yesterday, hip hop legend Lauryn Hill announced The Miseducation Of Lauryn Hill 20th Anniversary Tour 2018.
This summer marks the 20th anniversary of seminal hip-hop album The Miseducation Of Lauryn Hill, and Lauryn Hill is marking the occasion with a special anniversary tour dedicated to the album. Hill will be performing Miseducation in full, and each stop on the tour will feature “special guest performers” that haven’t been named yet. Plus, a portion of ticket sales will be donated to Hill’s MLH Foundation, which backs a huge group of charities built to help people all over the world, including the Africa Philanthropic Foundation, Appetite For Change, Apps & Girls, and the Equal Justice Initiative.
And Nerdwriter’s Evan Puschak, always with his ear to the ground (or perhaps with his ear to Drake’s Nice for What), just came out with this mini-doc celebrating Hill’s music, influences, and the people she’s influenced:
This might be one of the best Nerdwriter videos yet: no commentary, just clips of Hill performing and talking, music she was influenced by, and people & music that were influenced by her…an impressionistic portrait of a significant and uncompromising artist.
Permalink - Posted on 2018-04-18 22:16
This morning I ran across news from two different studies about reducing deaths from opioid overdoses and they both had the same solution: medication-assisted treatment. First, from a study involving inmates in Rhode Island correctional facilities:
The program offers inmates methadone and buprenorphine (opioids that reduce cravings and ease withdrawal symptoms), as well as naltrexone, which blocks people from getting high.
The data set is small but the results are encouraging: there were fewer overdose deaths of former inmates after the program was implemented in 2016.
Second, from France: in 1995, the country made it so any doctor could prescribe buprenorphine without any special licensing or training. Buprenorphine, a first-line treatment for opioid addiction, is a medication that reduces cravings for opioids without becoming addictive itself.
With the change in policy, the majority of buprenorphine prescribers in France became primary-care doctors, rather than addiction specialists or psychiatrists. Suddenly, about 10 times as many addicted patients began receiving medication-assisted treatment, and half the country’s heroin users were being treated. Within four years, overdose deaths had declined by 79 percent.
Permalink - Posted on 2018-04-18 21:01
The proprietor of the @brutsinlego account and his/her children build simple Brutalist structures out of Lego and post the results to Instagram.
BTW, the term Brutalist does not refer to the frequently brutal (adj. “direct and lacking any attempt to disguise unpleasantness”) appearance of buildings built in this style; it comes from the French term béton brut (raw concrete), which describes the unfinished concrete surfaces of these buildings.
Further BTW: Google Translate variously translates “brut” to “gross”, “raw”, “crude”, “undefined”, “dry”, and “rude”. Brut and brutal also likely have the same Latin root, so to some extent, the assumption that Brutalism refers to the blunt appearance of these buildings has some merit.
Permalink - Posted on 2018-04-18 19:33
The use of satellite imagery has revolutionized many areas of science and research, from archaeology to tracking human rights abuses to (of course) climate science. This vantage point makes possible different sorts of observations than can be made at ground level.
In what she calls “a work in progress”, Jia Zhang, a PhD candidate at MIT Media Lab, used census data to collect chunks of satellite images from areas with the highest concentrations of white, black, Asian, and Native American & Alaska Native people. The result is striking (but perhaps not surprising):
I’m looking forward to seeing more of Zhang’s work in this area.
Permalink - Posted on 2018-04-18 17:14
In this short film, animator and director Ainslie Henderson talks about how he designs puppets for his stop motion animations, creating a charming little stop motion music video in the process.
Puppet-making often begins by just gathering stuff, like materials that I find attractive. Wood and sticks and wire and leaves and flowers and petals and bits of broken electronics…things that have already had a life are lovely to have in puppets.
Permalink - Posted on 2018-04-18 15:00
Yesterday a Southwest flight from NYC to Dallas experienced an in-flight engine explosion and had to make an emergency landing in Philadelphia. The explosion tore a hole in the fuselage and a passenger started to get sucked out of the hole before being pulled back in (she subsequently died). As Wired’s Jack Stewart notes in an informative piece about how emergencies like this are handled, the plane’s pilot sounded remarkably calm in her communications with air traffic control:
The pilots don’t reach out to air traffic control until that descent is underway. “Something we teach students from day one is aviate, navigate, communicate — in that order,” says Brian Strzempkowski, who trains pilots at Ohio State University’s Center for Aviation Studies.
“They’d say mayday three times, say their call sign, engine failure, descending to 10,000 on heading of XYZ,” says Moss. The pilot, air traffic controllers, and an airline dispatch unit work to find the best airport for an emergency landing. In less critical circumstances, it may be better to fly a little farther to a larger airfield with more facilities, but in extreme emergencies — such as this one — the pilot can ask for priority, and the controllers will clear the path for her to land at the closest runway, in any direction.
As terrifying as this looks, the pilot talking to air traffic control sounded remarkably calm. “We have a part of the aircraft missing, so we’re going to need to slow down a bit,” she said.
You can listen to the air traffic control audio here:
The pilot, Tammie Jo Shults, was a Navy fighter pilot, so that explains some of her chill. And Neil Armstrong’s combat experience in the Navy surely contributed to his calmness when he took manual control to steer the LM around an unsuitable landing site w/ very little fuel left while trying to land on the surface of the dang Moon with unknown alarms going off — you can read all about it here and listen to Armstrong, Buzz Aldrin, and Mission Control discussing the whole thing here as if they’re trying to decide on a lunch place.
But the Navy angle is not the whole story. I’ve talked a bit before about my dad, who was a working pilot when I was a kid. He was sometimes not the most relaxed person on the ground, but at the controls of a plane, he was always calm and collected.
It was a fine day when we set out but as we neared our destination, the weather turned dark. You could see the storm coming from miles away and we raced it to the airport. The wind had really picked up as we made our first approach to land; I don’t know what the windspeed was, but it was buffeting us around pretty good. About 50 feet off the ground, the wind slammed the plane downwards, dropping a dozen feet in half a second. In a calm voice, my dad said, “we’d better go around and try this again”.
The storm was nearly on top of us as we looped around to try a second time. It was around this time he announced, even more calmly, that we were “running a little low” on fuel. Nothing serious, you understand. Just “a little low”.
How these pilots talk is not an accident. That characterless voice emanating from the flight deck during the boarding process telling you about your destination’s weather sounds conversationally beige…until something like losing an engine at 30,000 feet happens and that exact same voice, and the demeanor that goes with it, takes on a razor’s edge of magnificent competence and steadiness and even heroism.
Permalink - Posted on 2018-04-17 20:41
Computer scientist, mathematician, and all-around supergenius Alan Turing, who played a pivotal role in breaking secret German codes during WWII and developing the conceptual framework for the modern general purpose computer, was also a cracking good runner.
He was a runner who, like many others, came to the sport rather late. According to an article by Pat Butcher, he did not compete as an undergraduate at Cambridge, preferring to row. But after winning his fellowship to King’s College, he began running with more purpose. He is said to have often run a route from Cambridge to Ely and back, a distance of 50 kilometers.
It’s also said that Turing would occasionally run to London for meetings, a distance of 40 miles. In 1947, after only two years of training, Turing ran a marathon in 2:46. He was even in contention for a spot on the British Olympic team for 1948 before an injury held him to fifth place at the trials. Had he competed and run at his personal best time, he would have finished 15th.
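For a sense of how fast a 2:46 marathon is, here's a quick sketch converting it to a per-mile pace. The finishing time comes from the text; 26.219 miles is the standard marathon distance:

```python
# Convert Turing's 2:46 marathon into an average per-mile pace.
# Finishing time is from the text above; 26.219 miles is the
# standard marathon distance.

total_minutes = 2 * 60 + 46   # 166 minutes
miles = 26.219

pace = total_minutes / miles  # minutes per mile
pace_min = int(pace)
pace_sec = round((pace - pace_min) * 60)

print(f"{pace_min}:{pace_sec:02d} per mile")  # 6:20 per mile
```

Roughly 6:20 per mile for over 26 miles, on two years of training, while running the British computing effort in his day job.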
As the photo above shows, Turing had a brute force running style, not unlike the machine he helped design to break Enigma coded messages. He ran, he said, to relieve stress.
“We heard him rather than saw him. He made a terrible grunting noise when he was running, but before we could say anything to him, he was past us like a shot out of a gun. A couple of nights later we caught up with him long enough for me to ask who he ran for. When he said nobody, we invited him to join Walton. He did, and immediately became our best runner… I asked him one day why he punished himself so much in training. He told me ‘I have such a stressful job that the only way I can get it out of my mind is by running hard; it’s the only way I can get some release.’”
I found out about Turing’s running prowess via the Wikipedia page of non-professional marathon runners. Turing is quite high on the list, particularly if you filter out world class athletes from other sports. Also on the list, just above Turing, is Wolfgang Ketterle, a Nobel Prize-winning physicist who ran a 2:44 in Boston in 2014 at the age of 56.
Permalink - Posted on 2018-04-17 18:36
Photographer Pelle Cass has been constructing composite photos of groups of people for some time now, photoshopping the action from dozens of photos into a single frame.
With the camera on a tripod, I take many dozens of pictures, and simply leave in the figures I choose and omit the rest. The photographs are composite, but nothing has been changed, only selected. My subject is the strangeness of time, the exact way people look, and a surprising world that is visible only with a camera.
More recently, Cass has turned his attention to sporting events, capturing competitors playing basketball, diving, playing lacrosse, running track, and playing hockey. The project is called Crowded Fields; it’s not up on his website yet, but you can see some of the images on Instagram and Booooooom.
I love this sort of thing, whole stretches of time compressed into single frames or short videos. See also time merge media, Peter Funch’s Babel Tales, Dennis Hlynsky’s bird contrails, and busy day at the airport. (via colossal)
Permalink - Posted on 2018-04-17 16:19
John Corcoran was slow to talk as a child and then when he got to school, he didn’t learn to read right away. Or in the years following. He graduated from high school and college not being able to read or write…and then got a job teaching high school.
So I graduated from college, and when I graduated there was a teacher shortage and I was offered a job. It was the most illogical thing you can imagine — I got out of the lion’s cage and then I got back in to taunt the lion again.
Why did I go into teaching? Looking back it was crazy that I would do that. But I’d been through high school and college without getting caught — so being a teacher seemed a good place to hide. Nobody suspects a teacher of not knowing how to read.
I taught a lot of different things. I was an athletics coach. I taught social studies. I taught typing — I could copy-type at 65 words a minute but I didn’t know what I was typing. I never wrote on a blackboard and there was no printed word in my classroom. We watched a lot of films and had a lot of discussions.
I remember how fearful I was. I couldn’t even take the roll — I had to ask the students to pronounce their names so I could hear their names. And I always had two or three students who I identified early — the ones who could read and write best in the classroom — to help me. They were my teaching aides. They didn’t suspect at all — you don’t suspect the teacher.
This story is not very complimentary about the US educational system (or society for that matter). BTW, I’m not sure it mattered very much that Corcoran taught while illiterate. For all we know, he was a good teacher whose discussion-based methods and empowerment of student-teachers were more effective than multiple choice tests in fostering learning. I’m much more bothered that he didn’t get the help he needed as a child…and about all the assumptions about reading and learning that are built into our educational system.
Permalink - Posted on 2018-04-17 14:13
The Boston Marathon was run yesterday under terribly rainy and windy conditions and many of the top competitors didn’t do so well. But as Dennis Young explains, that made room for some unusual names at the top of the winners’ list. The winner on the men’s side was Yuki Kawauchi, an amateur Japanese runner who runs about one marathon a month (the elite pros only do ~2-3 a year), trains in his spare time from his government job, and has run the most sub-2:12 marathons ever.
This was at least his 71st competitive marathon since the beginning of 2012, averaging just under one a month. Overall, he’s run in at least 81 marathons.
He’s run 26 of them faster than 2:12 and 79 of them under 2:20. Both of those numbers are world records.
In January, Kawauchi ran a 2:18:59 marathon in Marshfield, Massachusetts in one-degree weather. He was the only finisher.
That race gave him the most marathons ever run under 2:20; he finished two more between then and Boston. (Obviously he was the only one of his competitors to have already run a marathon this year. Today was his fourth of 2018.)
Oh, and to prep for Boston, he ran a half-marathon in a panda suit. More on Kawauchi and his unusual training methods here. On the women’s side, Desi Linden was the first American woman to win the race in 33 years, beating the field by over four minutes, even after she hung back mid-race to help a fellow American runner re-join the pack.
She told an interviewer on the broadcast that she felt so bad early on that she figured she’d do what she could to help an American win. When Shalane Flanagan sprinted off the course for a bathroom break roughly 12 miles in, it was Linden who hung back and waited for Flanagan before helping her re-catch the pack. A little more than an hour later, Linden had the title wrapped up.
The women’s second place finisher was perhaps even more surprising. Like Kawauchi, Sarah Sellers is an amateur runner with a full-time job (she’s a nurse in Arizona), but unlike the prolific Japanese marathoner, Boston was only Sellers’ second marathon. She didn’t believe she’d gotten second, even when officials told her, which reminded me of Ester Ledecka’s Super-G victory in the 2018 Winter Olympics.
In what other highly visible and competitive sport can amateurs fare so well against professionals? Aside from the accountant who recently played goalie in an NHL game, it’s nearly unimaginable for an amateur to step into one of the major team sports and compete at a high level. Maybe golf?
Permalink - Posted on 2018-04-16 21:42
50 years ago this month, Stanley Kubrick’s 2001: A Space Odyssey premiered in the US. For this week’s issue of the New Yorker, Dan Chiasson looks at the cultural impact of the film, which got off to a rocky start.
Fifty years ago this spring, Stanley Kubrick’s confounding sci-fi masterpiece, “2001: A Space Odyssey,” had its premières across the country. In the annals of audience restlessness, these evenings rival the opening night of Stravinsky’s “Rite of Spring,” in 1913, when Parisians in osprey and tails reportedly brandished their canes and pelted the dancers with objects. A sixth of the New York première’s audience walked right out, including several executives from M-G-M. Many who stayed jeered throughout. Kubrick nervously shuttled between his seat in the front row and the projection booth, where he tweaked the sound and the focus. Arthur C. Clarke, Kubrick’s collaborator, was in tears at intermission. The after-party at the Plaza was “a room full of drinks and men and tension,” according to Kubrick’s wife, Christiane.
Chiasson references a 1966 profile of Kubrick in the New Yorker by Jeremy Bernstein, which catches the filmmaker in the act of making 2001.
In addition to writing and directing, Kubrick supervises every aspect of his films, from selecting costumes to choosing the incidental music. In making “2001” he is, in a sense, trying to second-guess the future. Scientists planning long-range space projects can ignore such questions as what sort of hats rocket-ship hostesses will wear when space travel becomes common (in “2001” the hats have padding in them to cushion any collisions with the ceiling that weightlessness might cause), and what sort of voices computers will have if, as many experts feel is certain, they learn to talk and to respond to voice commands (there is a talking computer in “2001” that arranges for the astronauts’ meals, gives them medical treatments, and even plays chess with them during a long space mission to Jupiter; “Maybe it ought to sound like Jackie Mason,” Kubrick once said), and what kind of time will be kept aboard a spaceship (Kubrick chose Eastern Standard, for the convenience of communicating with Washington). In the sort of planning that NASA does, such matters can be dealt with as they come up, but in a movie everything is immediately visible and explicit, and questions like this must be answered in detail. To help him find the answers, Kubrick has assembled around him a group of thirty-five artists and designers, more than twenty special-effects people, and a staff of scientific advisers. By the time the picture is done, Kubrick figures that he will have consulted with people from a generous sampling of the leading aeronautical companies in the United States and Europe, not to mention innumerable scientific and industrial firms. One consultant, for instance, was Professor Marvin Minsky, of M.I.T., who is a leading authority on artificial intelligence and the construction of automata. (He is now building a robot at M.I.T. that can catch a ball.)
Kubrick wanted to learn from him whether the things that he was planning to have his computers do were likely to be realized by the year 2001; he was pleased to find out that they were.
A new book by Michael Benson, Space Odyssey: Stanley Kubrick, Arthur C. Clarke, and the Making of a Masterpiece, looks back at how the film was made. The visual effects are one of the reasons the film is so celebrated today; Vulture took a quick look at four of the most influential effects:
The ending of the film can still be puzzling after several viewings — deliberately so, according to Kubrick — but ScreenPrism took a crack at a literal explanation of the Giant Space Baby et al.:
Kubrick himself explained the plot of 2001 in a 1969 interview in just two paragraphs:
You begin with an artifact left on earth four million years ago by extraterrestrial explorers who observed the behavior of the man-apes of the time and decided to influence their evolutionary progression. Then you have a second artifact buried deep on the lunar surface and programmed to signal word of man’s first baby steps into the universe — a kind of cosmic burglar alarm. And finally there’s a third artifact placed in orbit around Jupiter and waiting for the time when man has reached the outer rim of his own solar system.
When the surviving astronaut, Bowman, ultimately reaches Jupiter, this artifact sweeps him into a force field or star gate that hurls him on a journey through inner and outer space and finally transports him to another part of the galaxy, where he’s placed in a human zoo approximating a hospitable terrestrial environment drawn out of his own dreams and imagination. In a timeless state, his life passes from middle age to senescence to death. He is reborn, an enhanced being, a star child, an angel, a superman, if you like, and returns to earth prepared for the next leap forward of man’s evolutionary destiny.
And there’s much more to explore about 2001 in the kottke.org archives.