By Kristina Wright
November 13, 2024
Ready to take a trip down memory lane? Before tablets, touchscreens, and Wi-Fi, toys were all about tactile fun, imagination, and the joy of hands-on play. Today, retro toys carry a special charm, reminding us of simpler times when even a bouncy spring or simple building blocks could offer hours of entertainment. In a world where tech toys are constantly evolving, these classics have stayed true to their roots — some have barely changed from their original designs, while others have adapted for new audiences in surprising ways.
Whether you’re looking to reconnect with your childhood favorites or introduce a new generation to the magic of these timeless playthings, these retro toys will bring a touch of nostalgia.
With the tagline “Knock his block off!” and a comic book-worthy illustration on the box, Rock ’Em Sock ’Em Robots quickly captivated audiences when they were introduced in 1965 by toy designer Marvin Glass. This two-person game featuring boxing robots Red Rocker and Blue Bomber inspired many a playful boxing match in the decades that followed, and is still capturing imaginations today. A live-action movie starring Vin Diesel is rumored to be in the works, but until these toy robots hit the big screen, you can find them at Walmart for $21.92.
Long before virtual reality headsets, the View-Master offered a window to the world, displaying stereoscopic 3D images of popular travel destinations. The original version, known as Sawyer’s View-Master, was introduced at the 1939 New York World’s Fair and was intended to replace the traditional postcard. The stereoscope quickly became popular with adults and children alike. You can find a contemporary version of this classic toy at Target for $9.99. For a more personalized experience, you can customize your own reel viewer disk at Uncommon Goods — it’s $34.95 for one reel viewer and a reel redemption code, and $16.95 for each additional reel.
Invented by mechanical engineer Richard James at a Philadelphia shipyard in 1943, the Slinky was an accidental discovery that came out of the development of a line of sensitive springs designed to stabilize fragile equipment on ships. Two years later, James demonstrated the toy — named by his wife, Betty James — at Gimbels department store in Philadelphia and sold his entire stock of 400 toys within 90 minutes. Today, the Slinky comes in a variety of colorful, plastic styles, but the original Slinky is still the most popular version. You can buy it on Amazon for $3.59.
Created by two employees at Marvin Glass and Associates and sold by Ideal in 1963, Mouse Trap was one of the first mass-produced 3D board games. Unlike board games where players simply race or strategize toward an endpoint, its engaging Rube Goldberg-style design blends collaboration and competition: Players work together to build an intricate contraption in which each piece sets off a chain reaction, ultimately dropping a tiny plastic cage to “trap” an opponent’s mouse-shaped game piece. Originally designed as a fun way to teach engineering concepts, Mouse Trap combines the thrill of strategy with the satisfaction of building. This family favorite has been updated and is available from major retailers and on Amazon for $21.28.
Invented by George Lerner and launched by Hasbro in 1952, Mr. Potato Head was originally sold as a kit of 30 plastic parts meant to be inserted into real potatoes. It became the first toy ever advertised on television, earning the company more than $4 million in its first few months and spurring other toy companies to start their own television marketing campaigns. In 1953, Mrs. Potato Head was introduced, and by 1964 the toy had evolved to include a plastic potato body with holes for facial features and limbs. Available at most toy retailers, Mr. Potato Head and Mrs. Potato Head, along with a Potato Head family-of-three kit, are also available on Amazon for $5.00 to $19.99.
Invented in Paris by electrician André Cassagnes in the late 1950s, the Etch A Sketch used a unique internal system of a stylus and pulleys to create an electrostatic charge that would hold aluminum powder to the glass screen. Discovered by the Ohio Art Company at the 1959 Nuremberg Toy Fair, the mechanical drawing toy was launched during the 1960 holiday season and quickly became a symbol of mess-free creative play. Two large knobs control the vertical and horizontal movements of the internal stylus, allowing users to draw images as simple or as elaborate as they like. This red-framed classic continues to offer a screen-free way to entertain and challenge the imagination. Available at most toy retailers, including Amazon for $23.99, the Etch A Sketch also comes in a mini version.
The building kits known as Lincoln Logs were first conceived in 1916 by John Lloyd Wright, the son of architect Frank Lloyd Wright. Inspired by his father’s earthquake-proof design for the Imperial Hotel in Tokyo, John began marketing his toy cabin construction kit in 1918 and received a patent two years later. He named the product Lincoln Logs for Abraham Lincoln’s boyhood log cabin in Kentucky. This classic American toy is still widely available from toy retailers such as Fat Brain Toys, which offers the 268-piece Lincoln Logs Classic Farmhouse for $129.95. A smaller 111-piece set is also available on Amazon for $45.07.
One of America’s oldest board games, The Game of Life — or Life, as it’s typically known — started its own life in 1860 as The Checkered Game of Life. The first board game invented by Milton Bradley, it originally included squares for disgrace, intemperance, poverty, and ruin as it guided players on a morality journey where success or failure was based on decisions made along the way. To celebrate its centennial in 1960, the Milton Bradley Company released an updated version of the game, changing the name to The Game of Life and shifting the focus from moral lessons to a modern life journey through experiences such as college, marriage, career choices, and family. The most recent updates to The Game of Life include the ability to adopt pets and an impressive 31 career options, and the game rules encourage players to “choose a path for a life of action, adventure, and unexpected surprises.” You can spin the wheel on the contemporary version of The Game of Life for $21.99 at Amazon.
The number 13 has long been considered unlucky in many Western cultures. Even today — in a world far less superstitious than it was in the past — a surprising number of people have a genuine, deep-rooted fear of the number 13, known as triskaidekaphobia. For this reason, many hotels don’t list a 13th floor (Otis Elevators reports 85% of its elevator panels omit the number), and many airlines skip row 13. And the more specific yet directly connected fear of Friday the 13th, known as paraskevidekatriaphobia, results in financial losses in excess of $800 million annually in the United States as significant numbers of people avoid traveling, getting married, or even working on the unlucky day.
But why is 13 considered such a harbinger of misfortune? What has led to this particular number being associated with bad luck? While historians and academics aren’t entirely sure of the exact origins of the superstition, there are a handful of historical, religious, and mythological matters that may have combined to create the very real fear surrounding the number 13.
The Code of Hammurabi was one of the earliest and most comprehensive legal codes to be proclaimed and written down. It dates back to the Babylonian King Hammurabi, who reigned from 1792 to 1750 BCE. Carved onto a massive stone pillar, the code set out some 282 rules, including fines and punishments for various misdeeds, but the 13th rule was notably missing. The artifact is often cited as one of the earliest recorded instances of 13 being perceived as unlucky and therefore omitted. Some scholars argue, however, that it was simply a clerical error. Either way, it may well have contributed to the long-standing negative associations surrounding the number 13.
The idea of 13 being unlucky may have originated with, or at least have been bolstered by, a story in Norse mythology involving the trickster god Loki. In this particular myth, 12 gods are having a dinner party at Valhalla when a 13th — and uninvited — guest arrives. It is the mischievous Loki, who sets about contriving a situation in which Hoder, the blind god of darkness, fatally shoots Balder the Beautiful, the god of joy and gladness, with an arrow. It’s possible that this ill-fated myth helped cement the number’s connection to chaos and misfortune in Nordic cultures, and in Western civilization more widely.
The Last Supper
Christianity has also helped fuel the superstition surrounding the number 13. In the New Testament — as in Norse mythology — there is a fateful gathering centered around a meal, in this case the Last Supper. At the dinner, Jesus Christ gathers with his Twelve Apostles — making 13 attendees in total. Judas Iscariot, the apostle who betrayed Jesus, is often considered to have been the 13th guest to sit down at the Last Supper, which might have contributed to the number’s negative connotation. This, in turn, may have led to the notion of Friday the 13th being a day of misfortune or malevolence, as the Last Supper (with its 13 attendees) was on a Thursday, and the next day was Friday, the day of the crucifixion.
It’s also possible that 13 gained a bad reputation because of the squeaky-clean nature of the number 12. In Christian numerology, 12 symbolizes God’s power and authority and carries a notion of completeness (a concept also found in pre-Christian societies). Its neighboring numeral may have suffered as a result, being seen as conflicting with this sense of goodness and perfection, further adding to the potent and enduring idea that the number 13 is unlucky.
6 Famous Members of the Skull and Bones Secret Society
In 1832, Yale University students William Huntington Russell and Alphonso Taft co-founded “The Order of the Skull and Bones,” a secret society that has gone on to become one of the most elite organizations of its kind in the United States. For almost two centuries, Skull and Bones has been a subject of much fascination, speculation, and suspicion. Its members have included some of the most influential and powerful figures in American history — including three U.S. presidents — and its secrecy has fueled numerous conspiracy theories and rumors about the society’s true nature and purpose.
Over the years, several strange secrets about Skull and Bones have been revealed. According to some accounts, new members are — or once were — made to lie naked in a stone coffin while describing their most intimate secrets and experiences. And the society’s headquarters — a stark, windowless brownstone building in New Haven, Connecticut, called “The Tomb” — is rumored to house a number of macabre artifacts, including the skulls of the Apache warrior Geronimo and the Mexican revolutionary Pancho Villa. Perhaps of greater import to the Bonesmen and Boneswomen, as initiates are known (women were granted membership in 1992), is the promise that all members are guaranteed lifelong financial stability — in exchange, of course, for their absolute loyalty and secrecy.
Despite this secretive nature, many prominent individuals have been identified as members of Skull and Bones. (Up until 1971, the society published an annual membership register.) Here are six of the most influential known members of the secret society.
William Howard Taft, 27th President of the United States
William Howard Taft served as president of the United States from 1909 to 1913, and later as chief justice of the United States — he is the only person to have held both positions. The young Taft was initiated as a Bonesman in 1878, which was no surprise as his father, Alphonso Taft, was the society’s co-founder. It’s hard to say how much bearing Skull and Bones had on Taft’s career, but he rose rapidly after Yale and was appointed a judge while still in his 20s. He became a federal circuit judge at 34, and in 1900 was sent to the Philippines by President William McKinley to serve as chief civil administrator — a political position that set him on the path to the White House.
Walter Camp, often referred to as the “father of American football,” was initiated into Skull and Bones in 1880. He was a college football player and coach at Yale, during which time he played a pivotal role in shaping the rules and strategies of the game. Camp’s changes included the introduction of the quarterback role, reducing the team size to 11 from 15, and replacing the traditional scrum of British rugby with the scrimmage. He served on Yale’s athletic committee for nearly 50 years, influencing not just football but collegiate athletics as a whole, and is widely considered his generation’s most influential champion of athletic sports.
Henry Luce was a hugely influential publisher who founded Time magazine, Life magazine, Fortune, and Sports Illustrated. Before he became one of the most powerful figures in the history of American journalism, Luce was a member of Skull and Bones. He was initiated in 1920 alongside his best friend Briton Hadden, with whom he co-founded Time in 1923. Hadden died six years after Time was first published, leaving Luce in sole control of the magazine. As with all new members of Skull and Bones, Luce was assigned a secret name — in his case, “Baal.” Many Bonesmen were given names from literature, myth, or religion, such as Hamlet, Uncle Remus, Sancho Panza, Thor, and Odin.
Skull and Bones has had its fair share of scientifically minded members, including climatologist William Welch Kellogg and physicist John B. Goodenough (recipient of the 2019 Nobel Prize for chemistry). Then there was Lyman Spitzer, a theoretical physicist and astronomer who joined Skull and Bones in 1935. Spitzer made significant contributions to several fields of astrophysics, including research into star formation and plasma physics. He was also the first person to propose the idea for a space-based observatory, which he detailed in his 1946 paper “Astronomical Advantages of an Extra-Terrestrial Observatory.” His idea later became reality in 1977, when NASA, along with the European Space Agency, took the concept and began developing what became the Hubble Space Telescope.
George H.W. Bush, 41st President of the United States
George H.W. Bush was a notable student during his time at Yale. He was accepted into Phi Beta Kappa, a prestigious academic honor society; he captained the Yale baseball team that played in the first two College World Series; and he was a member of the Yale cheerleading squad. He was a worthy candidate for Skull and Bones, which he was initiated into in 1948. Later, of course, Bush became the second Bonesman to occupy the Oval Office, when he was sworn in as the 41st president of the United States in 1989. Bush wasn’t the first of his family to join Skull and Bones, though. His father, U.S. Senator Prescott Bush, was initiated in 1917, while his uncle George Herbert Walker Jr. joined a decade later. Bush’s son, George W. Bush (the 43rd president of the United States), continued the family tradition when he too was initiated in 1968.
On the other end of the political spectrum, plenty of Bonesmen have gone on to become members of the Democratic Party, the most famous of whom is John Kerry, initiated in 1966. Prior to serving as secretary of state under Barack Obama, Kerry was the Democratic nominee for president in the 2004 election — and his opponent was none other than fellow Bonesman George W. Bush. When Kerry was asked what he could say about the significance of both him and Bush being Skull and Bones members, he simply and dutifully replied, “Not much, because it’s a secret.”
By Nicole Villeneuve
October 3, 2024
Covered bridges are an idyllic symbol of rural America. These charming, often hand-built structures have been romanticized in popular culture for years, from Thomas Kinkade’s painting “The Old Covered Bridge” to the novel (and film adaptation) The Bridges of Madison County. Though the age of concrete and steel has rendered them largely obsolete, these old wooden bridges continue to be beloved landmarks, their distinct roofs making them easily recognizable even today. But what exactly led to their proliferation in decades past?
A covered bridge is exactly what its name suggests: a bridge with a roof and enclosed sides, typically constructed from wood. The reason for the covering is quite simple. While there are some theories — most likely with some truth to them — that the roofs were added to keep animals calm above rushing water, or to provide shelter for travelers, the real purpose was much more practical. Wooden bridges, which were common in the U.S. and Europe in the 18th and 19th centuries due to the abundance of timber, deteriorated quickly when exposed to the elements. Rain, snow, and sunlight caused the wood to rot or warp, compromising the materials’ integrity and reducing the lifespan of the bridge. Covering the structure protected the wooden framework and deck. By keeping the timber dry, the bridge’s life could be extended by decades. Uncovered wooden bridges might last just 10 to 20 years, whereas some of America’s original covered bridges, such as the Hyde Hall Bridge in New York’s Glimmerglass State Park, remain intact almost 200 years after being built.
Simply having a roof doesn’t necessarily make a structure a true covered bridge, though. Underneath every authentic covered bridge is its truss system, a network of beams, often in the shape of triangles, that distributes the weight of the bridge and the load it carries on its deck. The trusses, though rugged in appearance, require precision, and building one often took a whole village — quite literally. Dozens, if not hundreds, of skilled workers from the community were involved: sawyers to prepare the rough-cut logs, timber framers to properly place the beams, and stonemasons to build the abutments, to name a few.
While the bridge coverings were primarily a form of protection, they also became symbols of, and important to, the communities that built them. They served as gathering places and even inspired local lore — such as the tradition of couples sharing a covert kiss under the roof, inspiring the name “kissing bridges.”
Covered bridges began appearing in the United States in the early 1800s; one of the earliest and most famous examples was Philadelphia's Permanent Bridge, built by architect Timothy Palmer over the Schuylkill River in 1805. By the mid-19th century, covered bridges were a common sight in the American countryside; estimates suggest that as many as 10,000 were built by the peak of their popularity in the 1870s. Though they’ve become emblematic of bucolic Americana, they weren’t unique to the U.S. In ancient China, for instance, covered bridges — known as corridor bridges — served as multifunctional structures, housing community events and shops, or providing a place to rest. Similarly, covered bridges in Switzerland, such as the artwork-adorned Kapellbrücke (or “Chapel Bridge”) in Lucerne, have been around for centuries and remain admired for their intricate designs and historical significance.
While covered bridges were once a common sight across the American landscape, fewer than 1,000 remain today. Despite the protection and reinforcement a covering offered, it wasn’t always enough in the face of floods, fires, or neglect over time. Remaining structures in states such as Pennsylvania, Vermont, and Indiana continue to be preserved and restored, connecting travelers not only to the other side of the river, but to the past.
When Did We Start Giving Each Other Wedding Rings?
In weddings around the world, exchanging rings is a crucial part of the ceremony, a moment in which a couple’s promises are sealed with a tangible token. This simple piece of jewelry does a lot of heavy lifting: It acts as a symbol of love, unity, and eternity, while also making our relationship status clear to the world. Various cultures have contributed to the history of the wedding ring, from its ancient beginnings to the relatively recent advent of the double-ring exchanges popular today. But when and how exactly did this time-honored tradition begin?
It’s believed the ancient Romans were the first people to use wedding rings in a way resembling the modern custom, although exchanging rings as symbols of eternity or affection dates back even earlier to ancient Egypt and Greece. Roman weddings were not like the elaborate, picturesque affairs of today, however; marriages were often less about romance and more about family alliances and property. After a marriage contract was signed and a feast was had, there was a procession to the couple’s new home, where the bride was carried over the threshold. It was then that the groom presented the bride with a ring — not just as a gesture of affection, but as a public acknowledgment of their bond and a sign that she was now a part of his household. Romans first used copper and iron for the bands, but they began to favor gold after around the third century CE. In wealthier households, brides often had both: one ring, usually made of iron, to wear at home, and another fancier gold ring to present to the public.
The wedding ring was worn on the fourth finger of the left hand, a custom based on the belief that a vein — known as the vena amoris, or “vein of love” — connected this finger directly to the heart. This tradition may have originated in ancient Egypt, where rings were seen as symbols of eternity; the ring’s circular shape, with no beginning and no end, made it a powerful representation of infinity. While the vena amoris has since been proved anatomically incorrect, the symbolic ring placement on the left hand’s fourth finger remains customary. Though the Romans were the first to formalize the use of rings in a wedding ceremony, it’s believed they took a cue from the ancient Greek and Egyptian cultures. After Alexander the Great of Macedonia conquered Egypt in 332 BCE, the Greeks adopted the custom of giving rings as a sign of love — these tokens often featured motifs of Eros, the Greek god of love, known as Cupid in the Roman pantheon.
By the medieval period in Europe, the Christian church introduced more structured wedding rituals, including the presentation of a wedding band as part of a sacred union performed by a priest. But rings were still just for the bride. Interestingly, men did, for a time, wear engagement rings, long before they started wearing wedding rings. Gimmel rings, popular during the Renaissance era, consisted of two interlocking bands that were separated and worn individually during an engagement period, and then put back together to be worn as one band by the bride after marriage. But the one-sided wedding ring exchange persisted for centuries, all the way until the Second World War.
In the 1940s, as family values and stability were emphasized in the uncertainty of World War II, marriage rates soared in the United States. Jewelers jumped on the chance to promote men’s wedding rings — and it worked. During World War II, many deployed men wore wedding rings as a comforting reminder of their wives and families back home. By the late 1940s, about 80% of U.S. couples gave rings to each other during their wedding ceremonies, compared to just 15% at the end of the Great Depression. Social norms began to change, too, and as marriage became increasingly viewed as a partnership of equals versus an exchange of property, double-ring ceremonies became the norm, and remain so today.
The first inhabitants of what is now the United States appeared around 15,000 to 20,000 years ago — a blip in time compared with the long histories of humanity’s earliest homelands. Initially, population growth was slow due to the continent’s geographic isolation; significant increases began only after Europeans made their way to the Americas throughout the 16th and 17th centuries. By the 20th century, the U.S. population was experiencing rapid expansion — a trend that has slowed in recent years. Here’s a look at America’s changing population through history, from early prehistoric arrivals to the slowing growth we’re seeing today.
The North American continent was inhabited by prehistoric humans, although they arrived much later than humans in other parts of the world. While early human species have been around for millions of years, the first people didn’t make their way to North America until sometime between 20,000 BCE and 13,000 BCE. It’s believed they traveled via the Bering Land Bridge from modern-day Siberia to Alaska, although exactly when and how they first arrived is still a matter of debate. The number of people who were around in this era is debated as well, and while estimates vary, it’s believed some 230,000 people were living in America by 10,000 BCE.
By 1 CE, an estimated 640,000 people were living in what is now the United States. Indigenous peoples developed agricultural practices that helped to define their communities, especially along the Mississippi watershed. By 1100 CE, a settlement known as Cahokia, located across the Mississippi River from modern-day St. Louis, was home to about 20,000 people. The population continued to grow throughout the land, and by 1400, there were an estimated 1.74 million people in the modern-day U.S.
When Christopher Columbus landed in the Caribbean in 1492, that number had increased by nearly 150,000 people. But in the years that followed, as other European explorers began to map and claim parts of North America, waves of disease, displacement, and conflict had a major effect on the population. Within 100 years of the first European landing in the New World, an estimated 85% to 90% of the Americas’ Indigenous population was wiped out. In the modern-day United States alone, the population dropped almost 60% between the years 1500 and 1600, from 1.89 million to 779,000.
By the early 1600s, Spanish and English explorers had established permanent settlements in what is now St. Augustine, Florida, and Jamestown, Virginia, and by the mid-1600s, the Pilgrims had established themselves in modern-day Massachusetts. By 1700, the population of the colonies had grown to an estimated 250,000. Official population estimates at the time did not include Indigenous peoples, an omission that wasn’t corrected until the late 19th century; scholars who later attempted to include Indigenous populations in the count put it closer to 900,000. In 1776, when the U.S. declared independence from Britain, the known population had surged to approximately 2.5 million — but the new country was about to undergo even more transformation.
The period of time from the American Revolution through the end of World War II saw explosive population growth. In 1790, the first official U.S. census counted 3.9 million Americans living in the country. This era coincided with the start of the Industrial Revolution, a transformative time in the Western world. Technological innovation not only marked a societal shift from agrarian to industrial economies, but also spurred rapid urbanization. The era saw improvements in working conditions, sanitation, and medical care, too, making life expectancy longer than ever before. Ten years after the first U.S. census, in 1800, the population had shot up by almost 1.4 million people to reach 5.3 million. That surged to 23 million by 1850, fueled largely by a wave of European immigration to the U.S. By 1900, the country was home to 76 million people, a number that also reflected the country’s Indigenous residents. America’s population continued to grow during the early 20th century, reaching about 148 million by 1945.
The post-World War II era saw a dramatic increase in birth rates — a trend that famously became known as the baby boom. The sharp rise was due to a number of factors, key among them being economic prosperity, soldiers returning from war, and a cultural emphasis on family life. By the time the boom tapered off in 1964, some 76 million babies had been born, and these new citizens made up almost 40% of the country’s population of almost 197 million.
Today
In 2024, the U.S. Census Bureau counted America’s population at 337 million people. And while that’s an all-time high, the rate of growth has been slowing down. The baby boom was followed by a period of lower birth rates, which remain on a downward slope. Meanwhile, an aging population means death rates are projected to meet or exceed births.
The postwar period also saw changes in U.S. immigration policy, including the Immigration and Nationality Act of 1965, which did away with previous immigration quotas, opening the doors to more new Americans. Immigration has been the main driver of the country’s population growth since 1970, and that trend is expected to continue. In 2023, the Census Bureau projected that sustaining diverse immigration will help to balance the effects of an aging population. Still, despite a projected global population increase of nearly 2 billion people in the next 30 years, the U.S. population is expected to peak at around 370 million by 2080, then decline slightly to 366 million by 2100.
Much like fashion, the cyclical nature of baby names is influenced not only by cultural shifts, but also by historical events and popular media. For instance, in 1931, the name Bella was ranked No. 985 in the top 1,000 female names by the Social Security Administration, which uses Social Security card application data to determine the popularity of names, before falling off the list entirely for 69 years. We can’t be sure why the name made the list again in the year 2000, coming in at No. 749, but its rapid rise in popularity from there can be attributed to Bella Swan, the central character in Stephenie Meyer’s Twilight series, published between 2005 and 2008. Bella jumped in popularity to No. 122 in 2008, then to No. 58 the following year and No. 48 in 2010. The name remained on the list of the top 100 most popular female names through 2022, a trend bolstered by the film adaptations of the Twilight books.
While some popular names fade away only to come surging back many years later, others are perennial favorites decade after decade. Michael has been the No. 1 most popular male name for 44 of the past 100 years. On the female names list, Mary has taken the top spot 32 times and ranks as the overall most popular name of the past 100 years, despite falling as low as No. 135 over the years. The name James maintains the top spot for the most popular male name of the past century, though it has ranked as low as No. 19.
Old-fashioned names such as Harriet and Amos may make us think of our grandparents and a bygone era, but there is always a chance they’ll make a trendy comeback alongside more contemporary names such as Onyx, Anakin, and Nova (some of the top baby names in 2024). Here is a nostalgic look at eight vintage baby names that were once widely popular but have faded in use — at least for now.
Doris just squeaks onto the list of the 100 most popular names of the past 100 years at No. 98, despite the fact that it never cracked the top five in any year of the past century. Peaking at No. 6 in 1929, Doris didn’t even make the top 1,000 names in 2023. Doris Day, born Doris Kappelhoff in 1922, is arguably the most famous Doris of the past century. A popular singer and actress in the 1950s and ’60s, Day had a wholesome girl-next-door image that contrasted with the cultural shifts of the 1960s, which may account for the name’s decreasing popularity in the decades since.
Albert is an example of a male name that seems old-fashioned and outdated in the U.S., but is still going strong in the U.K. This might have something to do with the number of royals who have had the name — in the past 200 years, there have been 12 members of the British royal family named Albert. The popularity of Albert peaked in the U.S. in 1910, when it was the 14th most popular male name. It ranked at No. 590 in 2023 in the U.S., while in the U.K., Albert ranked 76th on the list of the top 100 names for boys in 2024.
Some old-fashioned names have never scored high on the popularity lists, but still consistently ranked in the top 50 or top 100 names for several decades. In a list of the top five female names in each year of the last century, Judith appears only once, as the fourth most popular name in 1940. Yet it was one of the top 50 most popular female names between 1936 and 1956. In overall rankings, Judith comes in at No. 62 in a list of the 100 top female names of the past century, but most of that popularity came in the first half of the century; the name hasn’t cracked the top 100 since the 1960s.
Ernest
The name Ernest has had a long and popular history in the U.S., peaking at No. 21 on the list of the 1,000 most popular male names in 1885. It stayed in the top 100 until 1957 before a slow but steady drop saw it falling off the top 1,000 list entirely by 2019. Despite the name’s literary connections, including Ernest Hemingway and Oscar Wilde’s The Importance of Being Earnest, the decline of Ernest might have been due to its association with the fictional character Ernest P. Worrell, played by actor Jim Varney. Varney’s bumbling slapstick humor in television commercials, a TV series, and several feature films made the character a household name — though not one that parents wanted to give their babies.
In 1920, Mildred was the sixth most popular female name in the United States, but by 1985 it wasn’t even in the top 1,000. Mildred Pierce, a 1941 novel by James M. Cain that was adapted for film in 1945, may have helped sustain the name’s popularity well into the 1950s before it began a steady downward trend. While this old-fashioned name doesn’t seem poised for a comeback, the diminutive Millie ranked as the 102nd most popular female name in 2023 — its highest rank ever, thanks to the popularity of British actress Millie Bobby Brown.
From 1900 to 1963, Ralph consistently remained on the list of the top 100 most popular male names, peaking at No. 21 in 1917. From there, it drifted further and further down the list before making its final appearance in 2018 at No. 950. The decline in the name’s popularity likely had something to do with the unfortunate fact that the word “ralph” became a U.S. slang term for vomit in the mid-1960s.
Despite being the 23rd most popular female name in 1900 and staying in the top 100 until 1930, the name Gertrude completely vanished from the top 1,000 names after 1965. The sharp decline of this old-fashioned name likely followed the same trajectory as Doris, falling out of favor as the cultural revolution swept the country, and names such as Lisa, Kimberly, and Michelle rose in popularity.
Virgil
The name Virgil is best known as the English name of Publius Vergilius Maro, the influential first-century BCE Roman poet who penned the epic Aeneid. While the name has never been high on the list of the most popular male names, it cracked the top 100 names five times in the early 1900s and stayed in the top 500 until the mid-1970s. Virgil made its final appearance on the list of the top 1,000 male names in 1991 at No. 861, and some trend-watchers suggest it’s due for a comeback.
Websites such as Reddit, Quora, and JustAnswer have ushered in what The New Yorker recently called the “age of peak advice.” But people have long had a fondness for the old-fashioned advice column. The anonymity of the forum allows answer-seekers to sidestep embarrassment and participate in a virtual confessional. The advice column gained popularity in the U.S. in the late 1890s, catering mainly to women with a focus on social interactions, matters of the heart, and childcare. Marie Manning’s 1898 “Advice to the Lovelorn” column in the New York Evening Journal set the standard, incorporating the tone of conduct books for young women, which were popular in Britain in the 18th century, into its responses.
The majority of advice columns were written by women, but the publishing apparatus was controlled by men, leading to questions and replies that often reflected the sexist views of a patriarchal society. Countless columns reinforced the need for women to assume traditional gender roles such as marriage, homemaking, and child-rearing, while topics such as sexual orientation and adultery were rarely viewed with empathy or nuance.
During the 19th and 20th centuries, women known by the pen names Dorothy Dix, Abigail Van Buren (of “Dear Abby” fame), and Ann Landers (Van Buren’s twin sister!) became the most well-known and trusted advice-givers in America. Of course, social attitudes and customs have changed significantly over the decades, as has our understanding of science, and thus some of the advice that writers doled out seems pretty strange today. Here are five questionable tips from advice columns of yesteryear.
Although modern medicine has identified some of the root causes (pun intended) of baldness, the science behind hair loss was much more nebulous in the mid-20th century. Letter writer “B.C.D.” asked in a 1959 issue of The London Weekly Magazine why more men than women seemed to go bald. The response was a little thin: “The hair of men more commonly falls off than that of women as they become bald from the greater excitement which their pursuits occasion.” Tell that to professional football player Mack Hollins.
In an 1862 issue of The London Journal, readers were presented with a letter from “Harriet,” who was looking to find a way to “pass the dull evenings in the country.” The column dissuaded Harriet from pursuing activities such as books and music, which may have bored her, especially since she seemed cheerless. The suggestion? Science! According to the columnist, chemistry was “very popular with ladies who find time hanging heavily on their hands.” Was this response documenting an actual trend or making an inside joke that citizens of the 21st century don’t get? We may never know.
Simply Ignore Your Husband
This particular piece of advice might not get the support of modern couples counselors. In a 1943 edition of The Winnipeg Evening Tribune, advice columnist Virginia Vane counseled “Mrs. S,” a happily married woman whose husband had more interest in the morning paper than connecting with her. Mrs. S explained that despite removing the curlers from her hair and “wearing a dress plus a good morning smile,” Mr. S remained unfazed and neglectful. Vane suggested a tit-for-tat response. “It might be wise to try ignoring him,” she wrote. “He’ll always read at breakfast so why don’t you ask him for the other half and read yourself. You’ll no doubt feel better.”
In the October 12, 1895, edition of the Isle of Man Times, the “Advice to Wives” column prescribed nine rules for women, reinforcing the attitude that they should be selfless providers of childcare, cleaning, and meals. Like many advice columns from this era, it suggested that women were expected to put their husbands first, even at the risk of health and general happiness. “Don’t mope and cry because you are ill, and don’t get any fun; the man goes out to get all the fun, and your laugh comes in when he gets home again and tells you about it — some of it,” it stated. “As for being ill, women should never be ill.”
Today, it’s hard to imagine our entire planet populated by fewer people than we currently find in a single major city. And tens of thousands of years ago, it would have been shocking — and quite possibly terrifying — to imagine a world in which humans had built settlements as vast and crowded as those that exist today.
Population growth has, for the most part, been a long and steady process. But while it took most of human history for the population to reach 1 billion, it took only a little more than 200 additional years to hit 8 billion. Because of this rapid growth, the face of our planet and the influence that we’ve had on it have shifted massively in the last few centuries. Where it will all lead is an open question. But one thing is certain: People are currently living longer than ever before, and as things stand, the population will only continue to grow. Here’s a rundown of the world’s population throughout history, from prehistoric times to the present day.
Historians believe that around 55,000 early humans walked the Earth some 1.2 million years ago. By the end of the last ice age — about 20,000 years ago — the population had risen to about 1 million members of Homo sapiens. Over the next 15,000 years, as human societies developed, the population increased more rapidly. By 5000 BCE, the world population was at least 5 million, and some estimates go as high as 20 million. But even that higher number is still less than the present-day populations of cities such as São Paulo, Shanghai, and Tokyo.
Classical antiquity — the period that saw the original Olympic Games and the first of Homer’s epic poems — began in the eighth century BCE. During this time, the global population was an estimated 50 million to 100 million. By 1 CE, the estimated population was at least 170 million. A significant chunk of this population was soon controlled by Rome. The Roman Empire reached its height in 117 CE, comprising all the land from Western Europe to the Middle East, with a total population between 50 million and 90 million people.
The Middle Ages, or medieval period, lasted from around 500 CE to 1500 CE, from the fall of the Western Roman Empire to the beginning of the Renaissance era in Europe. The greatest growth occurred in Europe during the high Middle Ages, when developments in agriculture, the rise of cities, and a decrease in invasions helped the population swell from 35 million to 80 million between 1000 CE and 1347 CE. Then disaster struck. Between 1347 and 1351, the “Black Death” (bubonic plague) ravaged the continent, killing some 20 million people — at least 30% of Europe’s entire population. It took until the 16th century for the population of Western Europe to once again reach pre-1347 levels.
The worldwide population during the medieval period is estimated to have been at least 190 million in 500 CE, rising to around 425 million by 1500 CE. One of the greatest dents in the global population outside of Europe was caused by the Mongol invasion of China. Beginning in the early 13th century under the command of Genghis Khan, the war led to the deaths of tens of millions of people — at the time, the global population was around 360 million.
The Renaissance period in Europe spanned from the 15th century to the early 17th century, and can be seen as the transition from the Middle Ages to modernity in the West. The era saw a demographic shift as European powers colonized much of the Americas. Trade routes and colonies were founded, and nations began to engage in empire building. The worldwide population grew from around 350 million in 1400 to about 545 million in 1600 — a significant amount, but nothing compared to what was to come next.
The Industrial Revolution changed nearly every aspect of human society. Previously, birth rates and death rates were both very high, keeping the global human population comparatively stable. But when the Industrial Revolution began around 1760, improvements in technology, agriculture, medicine, and sanitation brought about a massive population growth spurt. In Western Europe, with people living longer and infant mortality on the decline, the population doubled during the 18th century from around 100 million to almost 200 million, and doubled again in the 19th century to roughly 400 million. Globally, a population landmark was reached in 1804 when the number of humans on Earth reached 1 billion for the first time.
Today — and Beyond
It took many thousands of years for the human population to reach 1 billion. But once that figure was reached, the growth rate became mind-boggling — and a source of concern due to the potential overpopulation of our planet. By 1950, the global population reached an estimated 2.5 billion. Today, the number of humans stands at a staggering 8 billion — and is currently growing by more than 200,000 people each day. According to the United Nations, the global population is expected to increase by nearly 2 billion in the next 30 years, and could peak at around 10.4 billion in the mid-2080s.
History is dotted with instances of mass hysteria, a perplexing phenomenon in which large groups of people are struck by the same physical or mental affliction without any apparent explanation, from uncontrollable movement to widespread paranoia. Given the uncertainty as to what causes these curious events, contemporary doctors have remained baffled as to how to prevent or cure them. Though there are some theories, plenty of questions remain, in some cases hundreds of years after the incident took place. Let’s take a closer look at some of history’s strangest instances of mass hysteria, from the Middle Ages to the 20th century.
In 1518, the city of Strasbourg (in modern France) was overcome by a mysterious “dancing plague” that affected some 400 residents. It all began in July of that year, when a woman known as Frau Troffea began spontaneously dancing in the middle of the street. After a week of boogying solo, Troffea was joined by several dozen others who also developed the sudden urge to dance. The group only grew larger throughout the rest of the summer, expanding to several hundred people who danced until they collapsed from exhaustion, or in rare instances, suffered a fatal heart attack. Much as the event began without any explanation, the dancing epidemic inexplicably started to wane by September, and the city returned to a state of normalcy.
Physicians at the time attributed the dancing ailment to “hot blood,” saying the only cure was for people to dance it out of their system until they no longer felt the urge. Other townsfolk believed they had been cursed by St. Vitus, the patron saint of dance, and were doomed to dance for eternity. But looking back, modern historians have several theories as to what caused the unusual event. Some believe it was induced by a combination of general stress and the side effects of new, untreated diseases such as syphilis. Another theory points to ergot, a fungus that grows on rye and other grains used to make bread. If consumed, ergot can cause spontaneous convulsions that may look like dance moves.
Between February 1692 and May 1693, more than 200 innocent people were accused of practicing witchcraft in the colonial town of Salem, Massachusetts. These accusations gave way to a mass hysterical event known today as the Salem Witch Trials, which was caused by a combination of xenophobia, religious extremism, and sexism.
The paranoia began at the house of Puritan minister Samuel Parris, whose 9-year-old daughter, Betty, along with her 11-year-old cousin, Abigail, began making unintelligible noises and convulsing. Other girls in town began exhibiting similar symptoms, claiming they had been pinched and pricked by various residents, often people the community already viewed with suspicion or disdain. This led to several women being accused of practicing witchcraft, including a beggar named Sarah Good, an elderly impoverished woman named Sarah Osburn, and an enslaved Indigenous woman named Tituba. While Good and Osburn maintained their innocence, Tituba “confessed” to serving the devil. This confession was likely untrue and made only to avoid further punishment, but it emboldened the town to pursue further accusations.
As mass hysteria swept across Salem, dozens of women were brought before panels and tribunals for questioning, and 19 people were executed. The courts were only disbanded after the wife of Governor William Phips was accused of witchcraft, as it was Phips who had established the courts to begin with.
The Hammersmith Ghost
In December 1803, a great panic struck the community of Hammersmith, a small town just outside London. Multiple locals claimed that a ghost cloaked in a white shroud had been confronting and terrorizing residents, and it was said that the ghostly specter would appear right as the church bell struck one in the morning. Locals believed the spirit was that of a villager who had died by suicide the year before, and that his tortured soul was destined to haunt the town. Residents cowered in fear at the idea of this apparition; in one instance, a spooked carriage driver thought he saw the ghost, abandoned his passengers, and fled on foot. Some women were even reported to have been frightened to death at the sight of the purported ghost, though there was no evidence that the spirit actually existed.
Eventually, some residents of the community got their hysteria in check and determined the ghost was most likely someone in a sheet who was intentionally scaring people as a prank. Others, however, maintained that a spirit was causing havoc from beyond the grave. One such believer was Francis Smith, who one evening mistook bricklayer Thomas Millwood — who wore an all-white outfit for his profession — for the supposed ghost, and shot Millwood dead in the street. Smith went on trial and was convicted of murder, yet he still garnered sympathy from townsfolk who were hysterical with fear. Smith’s sentence was reduced to one year of imprisonment, leading to a series of debates about whether someone could be held liable for a crime based on mistaken beliefs, as Smith committed the crime only after being deluded into believing there was a ghost in the neighborhood.
Mattoon, Illinois, is a small town that was overcome by widespread panic in 1944, when reports of a “Mad Gasser” swept over the community. On September 1, a woman named Aline Kearney was overcome by a “sickening, sweet odor” that resulted in the temporary paralysis of her legs. Kearney’s husband arrived at the house shortly after and claimed to see a tall man wearing dark clothing and a tight-fitting cap standing outside the window, whom he chased until the mysterious man disappeared. Kearney’s condition returned to normal after 30 minutes, but this was just the first of many similar disturbances to come. After word spread of the Kearney incident, other townsfolk claimed they also suffered paralytic symptoms after smelling unusual odors. The panic grew to the point where chemical weapons experts were brought into the community. Armed gangs also began roaming the streets to try to find the “Mad Gasser” and bring them to justice.
There were many theories as to the identity of the mysterious menace, ranging from a chemistry teacher to an escaped Nazi prisoner of war. But in the end no assailant was ever found, and it’s highly unlikely that one ever even existed. Many of the anecdotal gassing incidents were fueled by mass hysteria; in one case, a woman was simply overcome by fumes from a spilled bottle of nail polish. It’s also theorized that years of wartime-related stress, combined with reports of chemical weapons being used overseas, added to the overall sense of communal anxiety in Mattoon.
Laughter is the best medicine, but it can also be a most perplexing symptom. In 1962, a laughter epidemic struck students at a girls’ boarding school in Tanganyika (now Tanzania). The mysterious giggles first appeared in January in a town called Kashasha, where three students began laughing hysterically, seemingly out of nowhere. Despite efforts to calm the girls, their perpetual howling was contagious, as dozens of other students also began experiencing laughing fits that lasted anywhere from a few hours to 16 days.
Doctors were unable to explain the phenomenon, and the root cause remains unclear even today. Some historians believe that the laughter was a visceral reaction to living in such a strict cultural environment, but that theory is far from certain. With no idea about how to quell unstoppable laughter, the school had no choice but to temporarily close in March, but that decision only led to more problems. As the girls returned home, they mysteriously “infected” each of their communities with the same insatiable urge to laugh uncontrollably. Laughter spread like wildfire throughout the country, leading to 14 separate school closures and affecting more than 1,000 people. It took roughly two years for the epidemic to completely dissipate, and thankfully, nobody suffered any long-term medical effects during that time.