Ralph Bond, The Man With A Plan, Computer America Science/Tech Trends Correspondent

January 2019 show notes

https://youtu.be/WLaG-5CoOGE

Story 1:  This robotic jellyfish is a climate spy – Squishy tentacles let this gadget monitor coral reefs without disturbing them

Source:  Science News for Students Story by Tyler Berrigan

Link:  https://www.sciencenewsforstudents.org/article/robotic-jellyfish-climate-spy

To study coral reefs and the creatures that live there, scientists sometimes deploy underwater drones. But drones aren’t perfect spies. Their propellers can rip up reefs and harm living things. Drones also can be noisy, scaring animals away. A new robo-jellyfish might be the answer.

Erik Engeberg is a mechanical engineer at Florida Atlantic University in Boca Raton. His team developed the new gadget. Think of this robot as a quieter, gentler ocean spy. Soft and squishy, it glides silently through the water, so it won’t harm reefs or disturb animals living around them. The robot also carries sensors to collect data.

The device has eight tentacles made of soft silicone rubber. Pumps on the underside of the robot take in seawater and direct it into the tentacles. The water inflates the tentacles, making them stretch out. Then power to the pumps briefly cuts out. The tentacles now relax and water shoots back out of holes on the underside of the device. That rapidly escaping water propels the jellyfish upwards.
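
My note: the article doesn’t include any control code, but the pumping cycle it describes maps to a simple duty-cycle loop. Below is a minimal, purely illustrative Python sketch of that inflate-and-relax rhythm; the Pump class, pin numbering, and timings are my own stand-ins, not anything from Engeberg’s team.

import time

class Pump:
    """Stand-in for one of the robot's underside pumps (hypothetical interface)."""
    def __init__(self, channel):
        self.channel = channel
    def on(self):
        print(f"pump {self.channel}: drawing in seawater, inflating tentacle")
    def off(self):
        print(f"pump {self.channel}: off; tentacle relaxes and expels water")

def swim_cycle(pumps, fill_seconds=1.5, relax_seconds=0.5, strokes=3):
    # One stroke = inflate all tentacles, then briefly cut power so the
    # escaping water propels the robot upward, as described above.
    for _ in range(strokes):
        for pump in pumps:
            pump.on()
        time.sleep(fill_seconds)
        for pump in pumps:
            pump.off()
        time.sleep(relax_seconds)

swim_cycle([Pump(i) for i in range(8)])  # eight tentacles, per the article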

The robot also has a hard, cylindrical case on top. This holds the electronics that control the jellyfish and store data. One component allows wireless communication with the jellyfish. That means someone can remotely steer the robot by making different tentacles move at different times. The hard case could hold sensors, too.

Engeberg’s group described its robot’s design September 18 in Bioinspiration & Biomimetics.

Natural inspiration

The researchers had practical reasons for modeling their device on jellyfish. “Real jellyfish only need small amounts of power to travel from [point] A to B,” Engeberg says. “We wanted to really capture that quality in our jellyfish.”

Jellyfish move slowly and gently. So does the robo-jelly. That’s why the researchers think it won’t frighten marine animals. What’s more, Engeberg says, “The soft body of our jellyfish helps it to monitor ecosystems without damaging them.” For example, the robot could carry a sensor to record ocean temperatures. The data it gathered could help scientists map where and when the ocean is warming because of climate change.

I will skip reading this: “Jellyfish have been moving around our oceans for millions of years, so they are excellent swimmers,” says David Gruber. He’s a marine biologist at Baruch College in New York City who was not involved with the robot. “I’m always impressed when scientists get ideas from nature,” Gruber says. “Especially something as simple as the jellyfish.”

Fighting climate change motivates Engeberg and his team. “I have a deep desire to help endangered reefs around the world,” he says. He hopes his robo-jellyfish will help researchers study the otherwise hidden impacts of climate change at sea.

Tracking sea temperatures and other data can benefit people, too, by warning of worsening conditions. Warmer oceans can make storms more powerful and destructive. Warmer seawater also helps melt sea ice by eroding glaciers from below. That meltwater adds to rising sea levels. And higher seas can lead to coastal flooding, or make low-lying islands disappear altogether.

The robotic jellyfish is a work in progress. “We are making a new version right now,” Engeberg says. It swims deeper and can carry more sensors than the older model. This should make it an even better spy on the conditions affecting coral reefs worldwide.

Story 2:  Robotics startup Sarcos will roll out industrial exoskeletons this year – Yes, but will it help you throw a Xenomorph through an airlock?

Source: Techspot Story by Cal Jeffrey

Link: https://www.techspot.com/news/78104-robotics-startup-sarcos-rollout-industrial-exoskeletons-year.html

https://static.techspot.com/images2/news/bigimage/2019/01/2019-01-03-image-34.jpg

The big picture: While many robotics firms are researching ways to create human substitutes, some are focusing on how to augment human capabilities. Sarcos has some industrial-grade exoskeletons that it is preparing to launch before the end of 2019. There will be two models — one that is light and agile and a larger one for heavy lifting.

According to IEEE Spectrum, Sarcos Robotics is just about ready to roll out a couple of new exoskeletons designed for industrial use. The Guardian XO and Guardian XO Max were built to assist factory, construction, and mining workers by boosting their strength and protecting them from injuries. The exosuits have been in development for nearly two decades and should be ready for implementation toward the end of 2019.

The Guardian XO weighs about 50 pounds and can lift around 77 pounds (35 kg). While that is not a great effort for a human, the exoskeleton will allow its operator to lift that weight repetitively without tiring. The Guardian XO Max is a heavier unit but is capable of repeatedly lifting loads of around 200 pounds (90 kg).

The suits use what Sarcos calls a “get-out-of-the-way” control system. Sensors within the suit detect the user’s movements and mimic the speed, force, and direction in the appropriate limb. This control scheme makes the exoskeletons intuitive to use and requires minimal training.

“The suit moves along with you; you don’t have to think about how to use it,” said Sarcos CEO Brian Wolff.
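
My note: Sarcos hasn’t published how its controller works, but the “get-out-of-the-way” idea (sense the force the wearer applies, then move the suit so that force stays near zero) can be sketched in a few lines. Everything below, including the sensor and actuator functions, the gain, and the threshold, is a hypothetical illustration, not Sarcos code.

import random

GAIN = 0.8        # how strongly the suit chases the wearer's motion (assumed value)
DEADBAND = 0.5    # newtons; ignore tiny forces so the suit holds still (assumed value)

def read_cuff_force(joint):
    """Stand-in for a force sensor in the limb cuff."""
    return random.uniform(-5.0, 5.0)

def command_joint_velocity(joint, velocity):
    """Stand-in for the actuator command."""
    print(f"{joint}: commanded velocity {velocity:+.2f}")

def control_step(joints):
    for joint in joints:
        force = read_cuff_force(joint)
        if abs(force) < DEADBAND:
            command_joint_velocity(joint, 0.0)      # wearer isn't pushing; hold position
        else:
            # Move in the direction the wearer pushes, proportional to effort,
            # so the suit follows speed, force, and direction instead of resisting.
            command_joint_velocity(joint, GAIN * force)

control_step(["left_elbow", "right_elbow", "left_knee", "right_knee"])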

https://static.techspot.com/images2/news/bigimage/2019/01/2019-01-03-image-35.jpg

Both models are battery powered. The company claims that each unit can last for about eight hours per charge. Power cells can be hot-swapped as well, so companies running continuous operations don’t need to worry about downtime.

Until now the wearable robots have been confined to Sarcos’ R&D labs due to limitations in power management. However, recent breakthroughs have made the technology commercially viable.

“It’s one thing to make a very expensive robot in the lab,” said Wolff. “We’re finally at the point where the exoskeleton’s capabilities coupled with the economics make it a viable product.”

Sarcos is planning on implementing a “robot-as-a-service” business model. Companies that sign on will be provided with the exosuits and docking stations installed by Sarcos staff. The firm will also include ongoing maintenance, repairs, and upgrades in that cost.

“[The XO package] is roughly the equivalent to a fully loaded, all costs included, $25 per hour employee,” says Wolff regarding the subscription price.

This may seem like a somewhat high fee, especially since the arrangement is essentially a rental and the customer never gains equity in the equipment. However, Wolff claims that each exosuit can improve an employee’s productivity four- to eight-fold and will reduce the chance and number of on-the-job injuries.

See general company overview video here: https://youtu.be/zhLpnZSPkYs

Story 3:  This free online tool uses AI to quickly remove the background from images

Source:  MSN story by: James Vincent

Link:  https://www.msn.com/en-us/news/technology/this-free-online-tool-uses-ai-to-quickly-remove-the-background-from-images/ar-BBRaw9c?ocid=News

That’s me in outer space!

If you’ve ever needed to quickly remove the background of an image you know it can be tedious, even with access to software like Photoshop. Well, Remove.bg is a single-purpose website that uses AI to do the hard work for you. Just upload any image and the site will automatically identify any people in it, cut around the foreground, and let you download a PNG of your subject with a transparent background. Easy. 

My note: A PNG file is an image stored in the Portable Network Graphics (PNG) format. PNG uses lossless compression and supports transparency (an alpha channel), which is what lets the cut-out subject sit on a transparent background.

It’s the latest example of how machine learning techniques that were once cutting-edge are being turned into simple consumer tools. In the case of removing an image’s background, there are already a few open-source algorithms that can handle this particular task. Remove.bg has simply turned them (or something like them) into a free online utility.
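
My note: Remove.bg hasn’t said exactly which model it runs, but the general recipe the story alludes to, an off-the-shelf person-segmentation network plus a PNG alpha channel, can be sketched with open-source tools. The code below uses torchvision’s pretrained DeepLabV3 as a stand-in and is an illustration of the technique, not Remove.bg’s actual pipeline.

import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

def remove_background(in_path, out_path):
    # Pretrained semantic-segmentation model; class 15 is "person" in its label set.
    model = deeplabv3_resnet50(weights="DEFAULT").eval()
    img = Image.open(in_path).convert("RGB")
    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    with torch.no_grad():
        scores = model(preprocess(img).unsqueeze(0))["out"][0]
    mask = (scores.argmax(0) == 15).byte().mul(255).numpy()   # 255 wherever a person is
    rgba = img.convert("RGBA")
    rgba.putalpha(Image.fromarray(mask, mode="L"))            # person opaque, rest transparent
    rgba.save(out_path)                                       # PNG keeps the alpha channel

remove_background("photo.jpg", "cutout.png")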

Other similar tools include Deepart.io, which applies the style of one image (like a painting) to another, and LetsEnhance.io, which uses AI to automatically upscale pictures.

Remove.bg certainly isn’t faultless. Like any magic-wand tool, it gets a bit confused when faced with fuzzy boundaries between foreground and background. You can see that in the picture of Elon Musk at the top of the article [my note: see Elon images directly below] (check out his missing eye-chunk) and with this picture of Miles Davis lying on a furry bedspread:

But, it’s certainly robust enough to handle a wide range of pictures, and even though the site claims the tool only works with people, it can handle other subjects, as long as they’re clearly foregrounded.

Anyway. It’s a simple little utility that might be handy to bookmark. It saves time and can produce some pretty fun image macros without any hassle.

Story 4:  Next-wave sensor: How this tiny button could save you from sunburn

Source:  Chicago Tribune Story by Cindy Dampier

Link:  https://www.chicagotribune.com/lifestyles/sc-hlth-sun-exposure-monitor-1219-story.html

Things your phone probably reminds you of: Your mother’s birthday. Your kid’s doctor appointment. That thing Beyonce said on Twitter. The new episode of “The Romanoffs.” Email. News headlines.

Now, thanks to scientists at Northwestern University’s Center on Bio-Integrated Electronics, your phone can also tell you exactly how much sunlight your body has absorbed today, based on what you’re wearing, what the weather is and where you are physically located on the globe.

Oh, by the way — it’s time to reapply sunscreen.

This useful info comes courtesy of a tiny sensor developed by Northwestern researchers John A. Rogers and Dr. Steve Xu that can stick to your skin or clip onto your hat. “It’s smaller than a dime, thinner than a credit card,” says Xu, “and you can stick it or clip it anywhere, which allows people to customize it.”

His favorite application? Using the sensor as nail art. (Scientists love the fingernail as a vehicle for a wearable device, he says, because it’s stable, durable and can stand up to adhesives.)

The sensor is so small, Xu says, “I often forget I’m wearing it.” Yet, the device packs a lot of power and data-gathering ability: It can accurately measure UVA and UVB radiation as well as overall light exposure, runs on solar power without a battery, and never needs recharging.

Getting rid of the need to charge not only makes it easier to use, Xu says, “it allows the device to be even smaller, and cheaper to make.” It’s also virtually indestructible — in the lab, students dropped it into boiling water and simulated running it through a washing machine but were not able to break it.

The accompanying phone app allows users to enter information about sunscreen applied, clothing and activities (such as whether you’re in or out of the water). “It’s really a platform technology,” Xu says, “that can measure light extremely accurately in a novel way.”
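
My note: the story doesn’t describe the app’s internal math, so treat this as illustration only. A dose tracker of the kind described would accumulate measured UV irradiance over time, discount it by the sunscreen SPF the user entered, and warn before the total nears a sunburn threshold. The threshold, readings, and SPF handling below are simplified assumptions, not the Northwestern/L’Oreal algorithm.

BURN_THRESHOLD = 250.0   # J/m^2, a rough "minimal erythema dose" for fair skin (assumption)

def accumulated_dose(readings_w_per_m2, interval_seconds, spf=1.0):
    """Sum irradiance over time; SPF roughly divides the dose that reaches the skin."""
    dose = 0.0
    for irradiance in readings_w_per_m2:
        dose += (irradiance * interval_seconds) / max(spf, 1.0)
    return dose

# Example: a reading every 5 minutes during an hour outdoors with SPF 30 applied.
readings = [1.2, 1.5, 1.8, 2.0, 2.1, 2.0, 1.9, 1.7, 1.6, 1.4, 1.2, 1.0]  # W/m^2 of UV
dose = accumulated_dose(readings, interval_seconds=300, spf=30)
if dose > 0.8 * BURN_THRESHOLD:
    print(f"UV dose {dose:.0f} J/m^2: time to reapply sunscreen or head for shade")
else:
    print(f"UV dose {dose:.0f} J/m^2: you're fine for now")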

That’s important, he says, because sun exposure is the No. 1 contributor to skin cancer, which has become a growing, global epidemic.

“One in five Americans will have skin cancer in their lifetimes, and that’s really pretty scary. But if you think about when we’re outside enjoying ourselves, we are just guessing about how much sun exposure we are getting, and it’s inconsistent with how much sunscreen we put on. Usually, you don’t know until the next day, when you get red with sunburn, that you got too much sun.” And every sunburn increases the chance of skin cancer. “All of that,” Xu says, “translates to an increased lifetime risk.”

The increasingly ubiquitous need for better protection from UV radiation is why there is a consumer version of the sensor, called “My Skin Track UV,” that was developed with cosmetics giant L’Oreal. It launched in November at the Apple Store. You can stick it on your kid, or yourself, and get a phone alert that will warn you before soaking up the sun on your winter break vacation crosses the line into sunburn territory. (Which means you can also spare the rest of us back home that sympathetic cringe we get at the sight of your neon-red skin. Ow.)

But Xu says the device’s next version has other applications that dermatologists like him are excited about: “Light is one of the world’s oldest medicines,” he says, “and we use it to treat diseases.” These include skin diseases, seasonal affective disorder and jaundice in infants.

The new sensor is able to accurately measure light exposure that patients are getting from light therapy, so that it can be adjusted for greatest benefit. And it will allow doctors to carefully track sun exposure for skin cancer survivors.

“A lot of the things that we do are driven by the problems we see in our patients,” says Xu, “and as a dermatologist, I live and breathe skin cancer.” Xu is currently testing the sensor with skin cancer patients, to further explore its clinical possibilities and practical use. It’s all about finding the best intersections, he says, between tech and medicine. “How do we connect cool technology to really meaningful problems that have impact for people? That’s the recipe for what we do.”

Story 5:  Electro-tweezers let scientists safely probe cells – They allow repeated sampling of materials from the same living cell over time

Source:  Science News for Students Story by Maria Temming

Link:  https://www.sciencenewsforstudents.org/article/electro-tweezers-let-scientists-safely-probe-cells

A new set of tools can pull individual molecules out of a living cell without killing it.

Think of it like a set of tweezers for use in the world’s smallest game of “Operation.”

Normally, sampling what’s in a cell requires breaking it open. “You basically kill the cell to get access,” says Orane Guillaume-Gentil. She did not work on the new device. This microbiologist at ETH Zurich in Switzerland is, however, familiar with the idea. With older tools, she points out, “It’s not possible to look at one cell and follow it over time.” It would be dead after the first look.

Because the new technique is so gentle, it can be used on the same cell over and over. That could show how a cell responds to growth or to things in its environment. And it might help people better understand how healthy cells work, and what goes wrong inside sick cells.

The researchers used their tweezers to extract molecules from different types of cells. First the team stained its cells with dyes. These glowed when the dyes glommed onto particular targets, such as DNA. Those target molecules would now stand out when researchers viewed them under a microscope. And that helped them guide their tweezers to extract the desired substance.

The researchers have removed DNA from human bone-cancer cells. They also entered human artery cells to nab messenger RNA. It’s a type of molecule that holds the instructions for building proteins. Extracting this mRNA from two different spots in a single cell, one hour apart, confirmed that the tweezers could be used to sample a cell more than once.

My note: Wikipedia: Messenger RNA (mRNA) is a large family of RNA molecules that convey genetic information from DNA to the ribosome, where they specify the amino acid sequence of the protein products of gene expression.   Ribosome definition: a minute particle consisting of RNA and associated proteins found in large numbers in the cytoplasm of living cells. They bind messenger RNA and transfer RNA to synthesize polypeptides and proteins.

It’s electric!

Joshua Edel is a chemist at Imperial College London in England. Key to his group’s new tool is a sharp, glass rod. Its thin tip is less than 100 nanometers across. That’s about a tenth the diameter of a red blood cell. At the end of this rod are two electrodes. Each is made of a carbon-based material, such as graphite.

When Edel’s team applies an electric voltage to the tweezers, a powerful electric field develops around the electrodes. This attracts and traps small molecules within about 300 nanometers of the rod’s tip. Once in this electric net, molecules stay put until the voltage is turned off. Only then can the molecules drift away.
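
My note: the article doesn’t name the physics, but the behavior it describes (molecules pulled toward the sharp tip, then released when the voltage switches off) matches dielectrophoretic trapping. For reference, the standard time-averaged dielectrophoretic force on a small spherical particle of radius r is

\langle F_{DEP} \rangle = 2\pi r^{3}\, \varepsilon_{m}\, \mathrm{Re}\!\left[ \frac{\varepsilon_{p}^{*} - \varepsilon_{m}^{*}}{\varepsilon_{p}^{*} + 2\varepsilon_{m}^{*}} \right] \nabla |E|^{2}

where \varepsilon_{m} is the permittivity of the surrounding fluid and \varepsilon_{p}^{*}, \varepsilon_{m}^{*} are the complex permittivities of the particle and fluid. Because the force scales with the gradient of the squared field, it is strongest right at the sharp electrode tip, which is why molecules within a few hundred nanometers get pulled in and then drift away once the field is off.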

And this new tool can retrieve cargo bigger than a single molecule. For instance, mitochondria (My-toh-KON-dree-uh) are big structures in cells that convert nutrients into energy. Edel’s team used its nanotweezers to remove mitochondria from the nerve cells found in mouse brains.

So far, the researchers have only used their tweezers to operate on cells in petri dishes. But Edel says his team plans to test its pluckers on cells living inside growing tissues.

They described their nanotweezers online December 3 in Nature Nanotechnology.

This tool is “very powerful,” says Pak Kin Wong. He’s a biomedical engineer at Pennsylvania State University, in State College. Indeed, he notes, the new tool should make it possible to get a more detailed view of what goes on inside cells. For instance, plucking proteins and other stuff from different parts of a cell might highlight what role each plays.

Alexandra-Chloe Villani works in Cambridge, Mass. There, she studies genomics and immunology at Massachusetts General Hospital and the Broad Institute of MIT and Harvard. She believes these gentle tools might one day help in projects like the Human Cell Atlas. It aims to create unique “ID” cards for each type of cell in the human body. Each ID would describe how a particular type of cell works.

Sampling DNA from different cells might also help researchers search for random changes in genes known as mutations, Edel says. Those mutations might underlie different diseases.

Monitoring the molecular makeup of cells over time also could reveal how cells are affected by illnesses or respond to new drugs, adds Guillaume-Gentil.

Story 6:  Sony promises better face identification through depth-sensing lasers

Source:  The Verge Story by: Vlad Savov

Link:  https://www.theverge.com/2019/1/2/18164881/sony-tof-laser-depthsensing-3d-camera-report

Sony, the global leader in imaging sensors — both for smartphones and professional DSLR and mirrorless cameras — is eager to establish itself as the go-to supplier for the next generation of visual-processing chips with a set of new 3D sensors. Speaking with Bloomberg last week, [story posted January 2] Sony’s sensor division boss Satoshi Yoshihara said Sony plans to ramp up production of chips to power front and rear 3D cameras in late summer, responding to demand from multiple smartphone manufacturers. Though Yoshihara is geeked about the potential for augmented reality applications, the most intriguing aspect of this new tech would appear to be a better form of face identification than we currently have.

The Face ID approach that Apple first brought into use on the iPhone X — and others like Xiaomi, Huawei, and Vivo have since emulated — works by projecting out a grid of invisible dots and detecting the user’s face by the deformations of that grid in 3D space. Sony’s 3D sensor, on the other hand, is said to use laser pulses: much like a bat’s echolocation, it creates a depth map of its surroundings by measuring how long each pulse takes to bounce back. Sony’s sensor chief argues this produces more detailed models of users’ faces, plus it apparently works from as far away as five meters (16 feet).
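
My note: the time-of-flight math behind this is simple: depth is half the round-trip time multiplied by the speed of light. A toy illustration (not Sony’s implementation):

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def depth_from_round_trip(seconds):
    # Each pixel times how long the laser pulse takes to bounce back;
    # the pulse covers the distance twice, hence the divide by two.
    return SPEED_OF_LIGHT * seconds / 2.0

# A face at arm's length (~0.5 m) returns in roughly 3.3 nanoseconds;
# an object at the 5-meter range quoted above returns in roughly 33 ns.
for t_ns in (3.3, 33.0):
    print(f"round trip {t_ns:5.1f} ns -> depth {depth_from_round_trip(t_ns * 1e-9):.2f} m")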

Imaging hardware has traditionally been all about photography and videography, but depth-sensing of the kind Sony is talking up for 2019 is becoming increasingly important. The Japanese giant acquired a Belgian outfit called SoftKinetic a few years ago, which was renamed to Sony Depthsensing a year ago. Now there’s an entire website dedicated to Sony’s venture into the category, with autonomous cars, drones, robotics, head-mounted displays, and of course gaming all figuring as potential applications.

In the mobile context, there’s certainly room for improvement for current face-unlocking methods. The most basic kind, such as on the OnePlus 6T, uses the selfie camera to identify the user’s face, and is thus only usable in the dark if you’re willing to flash your face with a bright light every time you unlock your phone. Apple’s Face ID and its Android rivals are all built using multiple components that demand a significant chunk of real estate at the top of the device — which is fine for larger tablets like the new iPad Pro, but stands as a big hurdle for any phone designer eager to achieve the ultimate all-screen design. Sony’s 3D sensors would be an instant winner if they prove capable of matching Face ID for accuracy and security while shrinking down the size of required parts.

In late 2017, a report emerged of Apple preparing exactly this sort of 3D laser-based system for the 2019 iPhone, though at the time the company was said to still be courting suppliers. Yoshihara wouldn’t be drawn into discussing which hardware partners Sony expects to see using its 3D sensor technology, but Sony already provides imaging sensors to Apple, so there’s a reasonable chance that these two reports find their confluence with the release of the next set of iPhones featuring Sony’s upgraded 3D-sensing chip.

Story 7:  Google’s “Project Soli” radar gesture chip isn’t dead, gets FCC approval – Soli could enable smartwatches to detect hand gestures, if it ever launches

Source:  Ars Technica Story by Ron Amadeo

Link:  https://arstechnica.com/gadgets/2019/01/googles-project-soli-radar-gesture-chip-isnt-dead-gets-fcc-approval/

Google's radar chip detects hand motions, creating a gesture control system.

See video here: https://www.youtube.com/watch?v=0QNiZfSsPc0

Google’s radar-based gesture control system for mobile devices, Project Soli, isn’t dead yet. The project, which was announced all the way back in 2015, has popped up at the FCC, where it has been approved for use in the 57- to 64-GHz frequency band.

Project Soli’s goal is to build a tiny radar system on a chip that can be used to detect hand gestures made above a device. Soli is still at the experimental stage, but Google usually pitches it as a concept control scheme for smartwatches, speakers, media players, and smartphones.

Usually the gestures shown are things like tapping your thumb and index finger together for a virtual button press or rubbing the two fingers together to scroll or turn a dial. The idea makes the most sense for tiny devices like a smartwatch, which don’t necessarily have the space for a sizable touch screen and lots of buttons. It could also have benefits for users with limited mobility.

The FCC’s decision actually lets Google use Soli at higher than the currently allowed power levels, which was apparently needed to make the chip work. Google originally wanted approval for a power level in line with the European Telecommunications Standards Institute standards but was talked down, oddly enough, by Facebook, which was concerned about interference issues. Facebook is interested in 60 GHz broadband through its “Terragraph” project.

The FCC said the decision “will serve the public interest by providing for innovative device control features using touchless hand gesture technology.”

ATAP’s sketchy track record

The biggest news to come out of the FCC documents is just that Project Soli isn’t dead. The project is being developed at Google’s “Advanced Technology and Projects” (ATAP) division, which is infamous for announcing important-sounding projects that never see the light of day.

Google’s ATAP division was started by Regina Dugan, the former head of DARPA, back when Google owned Motorola. The group was created as a mobile-centric skunkworks with projects focused on a two-year timeline, after which they were supposed to be spun out of ATAP as a standalone project or shut down. The group has generated a lot of press coverage but very little in the way of impactful product launches. Usually ATAP’s projects are announced, hyped up, and then quietly cancelled years later or are never heard from again. Given that Dugan once said ATAP is supposed to be “unafraid of failure,” it seems that the low hit rate is supposed to be part of the group’s design.

There was Project Ara, a scheme to build modular smartphones, which was developed for three years, frequently delayed, and eventually cancelled in 2016. “Project Abacus” was a crazy smartphone authentication method announced in 2015 that aimed to use every sensor in a phone—the front camera, microphone, GPS, touchscreen usage, and more—to continually authenticate a user with a “trust score,” alleviating the need for a password. The idea sounded like a huge battery hog, but we will never know since the project was quietly killed at some point, or at least, it was scrubbed from ATAP’s site and hasn’t been mentioned for years. Project Vault was an SD card with a secure computing environment onboard, requiring separate authentication to access. Vault was another 2015 project that quietly disappeared, though a dormant code repository for it still exists online.

In terms of ATAP projects that actually saw the light of day, there was Project Jacquard, which saw Google team up with Levi’s to make a $350 jean jacket with a small touch panel in the sleeve. The most successful outing was probably Project Tango, which was a 3D-sensing smartphone loaded with specialized sensors. Tango was eventually cancelled, with much of the software work rolled into Android’s ARCore, which employs a more limited AR feature set using standard smartphone hardware. If you want to count software (every other ATAP project is hardware-focused), the group produced Google Spotlight Stories, which is a 360-degree video format.

Besides the already-launched Jacquard and Spotlight Stories, Soli is the last remaining project on ATAP’s website. Supposedly, a few developer kits were offered to a select group in 2016, but since then we haven’t heard much from the project. Will we ever see it in a commercial device?

Story 8:  Supersonic passenger jet firm raises $100 million, aims for 2019 test flights – Company says it remains on track for mid-2020s delivery of new planes

Source:  Ars Technica Story by Eric Berger

Link:  https://arstechnica.com/science/2019/01/supersonic-passenger-jet-firm-raises-100-million-aims-for-2019-test-flights/

Founded in 2014, Colorado-based Boom Supersonic says it has been making steady technical progress toward the return of supersonic civilian travel. Now it has some considerable resources, as well. On Friday, [story posted January 4] the company announced that it has raised $100 million in Series B financing, bringing its total funding to more than $141 million.

“This new funding allows us to advance work on Overture, the world’s first economically viable supersonic airliner,” said Blake Scholl, founder and CEO of Boom Supersonic, in a news release.

Boom said it now has a workforce of 100 people and expects to double that this year. Moreover, the company has begun assembling a one-third scale prototype of its planned commercial airliner. This XB-1 vehicle could take flight later this year, with Chief Test Pilot Bill “Doc” Shoemaker at its controls. Boom intends for the prototype to validate its concepts for efficient aerodynamics, advanced composite materials, and an efficient propulsion system.

Current commercial airliners travel at an average cruising speed of about 900 km/hour, or about 75 percent of the speed of sound. By contrast, Boom envisions its Overture airliner traveling at Mach 2.2, which is about 10 percent faster than the Concorde traveled. The company says a flight from New York to London would take about 3 hours and 15 minutes, and Sydney to Los Angeles would take 6 hours and 45 minutes.
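
My note: a quick back-of-the-envelope check of those flight times. The great-circle distances and the speed of sound at cruise altitude below are my own approximations, not Boom’s figures, and the quoted schedules also include climb, descent, and slower overland segments.

SPEED_OF_SOUND_AT_ALTITUDE_KMH = 295 * 3.6               # ~1,062 km/h near 11 km altitude
CRUISE_KMH = 2.2 * SPEED_OF_SOUND_AT_ALTITUDE_KMH        # Mach 2.2 is roughly 2,340 km/h

routes_km = {
    "New York - London": 5_570,       # approximate great-circle distance
    "Sydney - Los Angeles": 12_050,   # approximate great-circle distance
}

for route, km in routes_km.items():
    hours = km / CRUISE_KMH
    print(f"{route}: about {hours:.1f} hours at Mach 2.2 cruise")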

One of the key innovations Boom hopes to demonstrate is reaching a high rate of speed without injecting additional fuel into the jet pipe after the turbine, a process known as afterburning. This increases thrust but substantially decreases fuel efficiency. The Concorde used afterburning during takeoff and acceleration; Boom hopes to ditch afterburning with better aerodynamics, materials, and propulsion.

Mid-2020s

The additional funding keeps Boom Supersonic on track with the development of the full-size Overture airliner, the company said. Its planes could be ready for commercial service in the mid-2020s, and Boom said Virgin Group and Japan Airlines have preordered a combined 30 airplanes.

Initially, the company said roundtrip flights would cost about $5,000. Now, on its website, it says, “Final ticket prices will be set by airlines, but we are designing the aircraft so that airlines can operate profitably while charging the same fares as today’s business class. Our ultimate vision is to reduce operating costs to make supersonic flight even more affordable and accessible.”

The envisioned 55-seat Overture aircraft has a similar appearance to the Concorde from the outside, with a sleek nose and delta-wing shape. The key question is whether the company can design and develop the Overture vehicle for a reasonable amount of money, and then, whether commercial airlines can fly it profitably. It is not clear how much it will ultimately cost to bring the Overture vehicle into service or how close the new round of funding gets Boom to this goal (we have reached out to the company). Concorde development costs were about $7 billion in present-day dollars. Boeing and Airbus spent an estimated $2.5 billion to $6 billion to bring some of their larger commercial jets to market.

Civilian supersonic service has a checkered record. The Soviet Tupolev supersonic aircraft flew just a few dozen commercial flights back in 1977, and the Concorde, flown by British Airways and Air France beginning in 1976, retired in 2003 after a fatal accident three years earlier that compounded economic problems.

The significance of Friday’s announcement is that it suggests that several key investors believe Boom Supersonic has a plausible technology roadmap, as well as a credible business case. The funding round was led by Emerson Collective and includes Y Combinator Continuity, Caffeinated Capital, and SV Angel. Individual investors include Sam Altman, Paul Graham, Ron Conway, Michael Marks, and Greg McAdoo.

For full show notes, check out ComputerAmerica.com!