Ralph Bond, Science/Tech Trends, Talks Robots, AI, and More!

For more about me, see: https://ralphbond.wixsite.com/aboutme

Story 1:  Electric air taxi startup Lilium completes first test of its new five-seater aircraft

Source:  The Verge Story by Andrew J. Hawkins

Link: https://www.theverge.com/2019/5/16/18625088/lilium-jet-test-flight-electric-aircraft-flying-car

https://img-s-msn-com.akamaized.net/tenant/amp/entityid/AABqZTY.img?h=630&w=1200&m=6&q=60&o=t&l=f&f=jpg&x=960&y=591

See video here: https://www.youtube.com/watch?v=8qotuu8JjQM

German air taxi startup Lilium announced the first test of its full-scale, all-electric five-seater aircraft. It was the latest in a series of successful tests for the nascent electric flight industry, which aims to have “flying cars” whizzing above cities within the next decade.

In a video provided by the Munich-based company, Lilium’s unpiloted aircraft can be seen taking off vertically like a helicopter, hovering briefly, and then landing. It may not seem like much, but it’s a big step for the company, which hopes to launch a fully operational flying taxi service in multiple cities by 2025.

Compared to the other preproduction electric aircraft we’ve seen so far, the Lilium Jet certainly stands out: it has an egg-shaped cabin perched on landing gear with a pair of parallel tilt-rotor wings.

The wings are fitted with a total of 36 electric jet engines that tilt up for vertical takeoff and then shift forward for horizontal flight. There is no tail, rudder, propellers, or gearbox.

When it’s complete, the Lilium Jet will have a range of 300 kilometers (186 miles) and a top speed of 300 km/h (186 mph), the company says.

That’s much farther than many of its competitors are predicting for their electric aircraft.

Remo Gerber, Lilium’s chief commercial officer, said this was due to the Jet’s fixed-wing design, which requires less than 10 percent of its maximum 2,000 horsepower during cruise flight.

The company has conducted test flights before. Back in 2017, Lilium announced the first test flight of its all-electric two-seater vertical take-off and landing (VTOL) prototype. But while the prototype was able to demonstrate the shift from vertical to forward flight, the full-scale Lilium Jet did not.

The power-to-weight ratio is a huge consideration for electric flight. It’s also one of its biggest inhibitors. Energy density — the amount of energy stored in a given system — is the key metric, and today’s batteries don’t contain enough energy to get most planes off the ground. To weigh it out: jet fuel gives us about 43 times more energy than a battery that’s just as heavy.
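
To put rough numbers on that claim, here’s a quick back-of-the-envelope check. The specific-energy figures are commonly cited approximations supplied here for illustration, not numbers from the article.

```python
# Back-of-the-envelope check of the "jet fuel holds ~43x more energy per
# kilogram than a battery" claim. The figures below are rough, commonly cited
# approximations (assumptions), not numbers from the article.
JET_FUEL_WH_PER_KG = 11_900   # ~43 MJ/kg for kerosene-type jet fuel
LI_ION_WH_PER_KG = 277        # a good lithium-ion battery pack, roughly

ratio = JET_FUEL_WH_PER_KG / LI_ION_WH_PER_KG
print(f"Jet fuel stores roughly {ratio:.0f}x more energy per kilogram")  # ~43x
```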

Gerber wouldn’t provide any details about the Lilium Jet’s weight capacity, but he insisted that it will eventually be able to carry five passengers and a pilot, plus luggage. Lilium’s “payload ratio is industry-leading, and that’s what is going to make the difference,” he said.

Unlike some of its competitors, Lilium plans to keep a human pilot on board its aircraft. This will enable an easier certification process, Gerber said. Lilium is in the process of securing certification for the five-seat air taxi from the European Aviation Safety Agency, and it will also seek certification from the US Federal Aviation Administration.

Gerber had more to say about the company’s business model, which includes an app-based, on-demand feature where customers can book a flight via a smartphone app, a la Uber. Think midtown Manhattan to JFK International Airport in under 10 minutes for $70. (Currently, a company called Blade, which bills itself as “Uber for helicopters,” offers the same trip for $195.)

Lilium isn’t the only company with designs for flying taxis. There are more than 100 different electric aircraft programs in development worldwide, with big names including Joby Aviation and Kitty Hawk, whose models are rotor- rather than jet-powered, as well as planned offerings from Airbus, Boeing, and Bell, which is partnered with Uber.

Story 2:  Nike’s app will use augmented reality to determine your shoe size

Source:  BGR Story by Mike Wehner


Link: https://bgr.com/2019/05/11/nike-fit-app-shoe-size/

https://img-s-msn-com.akamaized.net/tenant/amp/entityid/AABdNvJ.img?h=630&w=1200&m=6&q=60&o=t&l=f&f=jpg

See video here: https://www.youtube.com/watch?v=LMXc_1qCa8E

Do you know your own shoe size? You might think you do, but there’s a good chance you’re walking around in shoes that aren’t actually a perfect fit. Nike thinks it can change that with a new addition to its smartphone app that leverages the power of augmented reality to perform a super-accurate scan of your feet, matching you with the ideal size, depending on the kind of shoe you’re shopping for.

In a new blog post, Nike calls out the current system of shoe sizing as “antiquated,” calling it “a gross simplification of a complex problem.” That’s where Nike Fit, the new sizing feature, comes in.

Nike says the feature relies on a combination of computer vision, data science, machine learning, artificial intelligence, and recommendation algorithms. That’s a whole bunch of flashy terms that don’t mean a whole lot to the average shoe shopper, but the gist of it is that the app will allow you to take a scan of your feet when you’re about to buy a pair of kicks. Nike Fit will be an option alongside the traditional sizing list, but the company is obviously promoting this as the ideal way to get the perfect fit.

“Using your smartphone’s camera, Nike Fit will scan your feet, collecting 13 data points mapping your foot morphology for both feet within a matter of seconds,” Nike explains. “This hyper-accurate scan of your unique foot dimension can then be stored in your NikePlus member profile and easily used for future shopping online and in-store.”

Your Nike Fit scan will be stored and can be accessed by the app whenever it needs to guide you to your perfect pair of new shoes. Perhaps most interesting, this new sizing system may actually change its size recommendation for you based on the kind of shoe you’re shopping for.

Nike says “different shoes are made with different performance intent,” meaning that you’ll probably want something like a running shoe to be a little bit tighter than the kind of everyday sneaker you wear in casual situations. The app handles all those decisions on the backend, and Nike seems pretty sure it knows exactly how to set you up with the size you need.
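
Here’s a tiny sketch of that idea: the same foot scan can yield different size recommendations depending on the shoe’s intent. The offsets, function name, and size formula below are invented for illustration; Nike hasn’t published how Nike Fit maps scans to sizes.

```python
# Hypothetical sketch of shoe-specific sizing: the same scanned foot length
# maps to different recommendations depending on the shoe's "performance
# intent". All numbers and names here are made up for illustration.
FIT_OFFSET_MM = {
    "running": -2.0,    # performance shoes recommended a touch snugger
    "lifestyle": 3.0,   # everyday sneakers given a bit more room
}

def recommend_size(foot_length_mm: float, shoe_type: str) -> float:
    """Convert a scanned foot length into a rough US men's size."""
    target_mm = foot_length_mm + FIT_OFFSET_MM.get(shoe_type, 0.0)
    # Very rough rule of thumb: US men's size ~ 3 * (length in inches) - 22
    return round((target_mm / 25.4) * 3 - 22, 1)

print(recommend_size(270, "running"))    # snugger recommendation
print(recommend_size(270, "lifestyle"))  # roomier recommendation
```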

The feature isn’t available just yet, but it should arrive sometime in July in the U.S. and August in Europe.

Story 3:  This new Chinese camera can spot you from 28 miles away

Source:  BGR Story by Andy Meek

Link: https://bgr.com/2019/05/09/new-camera-technology-chinese-28-miles/

A team of Chinese researchers has announced its creation of an inexpensive but powerful camera that’s the size of a shoebox and seems destined to further chip away at privacy and individual anonymity — what’s left of them, at any rate.

The camera is meant for surveillance and target recognition and is reportedly capable of spotting someone — really, anything — from up to 28 miles away, even in conditions that would otherwise obscure sight, like smog. It does so using laser technology and artificial intelligence.

According to the MIT Technology Review, researchers from the University of Science and Technology of China in Shanghai figured out how to photograph subjects from so far away, even in a smog-filled urban environment, by using “single-photon detectors combined with a unique computational imaging algorithm that achieves super-high-resolution images by knitting together the sparsest of data points.”

It’s a major advancement for this kind of technology. Previous such cameras, for example, could only resolve and make sense of imagery captured by bouncing a laser off a subject from 10 miles away. The researchers, though, had to achieve multiple breakthroughs in order for their camera to actually work.

One is the 1,550-nanometer infrared laser, which reportedly won’t damage the human eye and is still able to penetrate barriers like fog to find its far-away subject.

In tandem with that, the researchers also needed a way to combine the points captured by their camera, which aren’t enough on their own to form a complete image. That’s why they also came up with an AI algorithm they trained to make sense of the data even when captured from a great distance.
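
To give a feel for the problem, here’s a toy reconstruction loop: photon detections are extremely sparse, and some computational step has to fill in everything between them. The stand-in below uses simple neighborhood averaging; the researchers’ actual algorithm is far more sophisticated and is not shown here.

```python
# Toy illustration (not the researchers' algorithm): single-photon detections
# arrive as very sparse per-pixel counts, and a reconstruction step "knits"
# them into a fuller image. Here that step is just iterative smoothing.
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64
scene = np.zeros((H, W))
scene[20:44, 28:36] = 1.0                      # a simple rectangular target

# Simulate sparse photon returns: most pixels register nothing at all.
counts = rng.poisson(scene * 0.05)

recon = counts.astype(float)
for _ in range(25):                            # crude iterative fill-in
    padded = np.pad(recon, 1, mode="edge")
    neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4
    recon = np.where(counts > 0, counts, neighbors)  # keep observed pixels

print("raw detections:", int(counts.sum()), "out of", H * W, "pixels")
```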

According to a paper published by the researchers, the potential applications here include remote sensing and airborne surveillance, with the camera design helping open up a new pathway “for high-resolution, fast, low-power 3D optical imaging over ultra-long ranges.”

Story 4:  First AI that sees like a human could lead to automated search and rescue robots, scientists say

Source:  UK’s Daily Mail Story by Victoria Bell for Mailonline

Link: https://www.dailymail.co.uk/sciencetech/article-7036517/First-AI-sees-like-human-lead-automated-search-rescue-robots.html?ns_mchannel=rss&ns_campaign=1490&ito=1490

  • Computer scientists have taught an agent to take snapshots of its surroundings
  • Most artificial intelligence systems are only trained for very specific tasks
  • It takes glimpses around a room it has never seen before to create a ‘full scene’
  • The team of computer scientists say the skill could be used for search-and-rescue missions, and that the agent would be equipped for new perception tasks as they arise

The tech, developed by a team of computer scientists from the University of Texas, gathers visual information that can then be used for a wide range of tasks.

The main aim is that it could quickly locate people, flames, and hazardous materials and relay that information to firefighters, the researchers said.

After each glimpse, it chooses the next shot that it predicts will add the most new information about the whole scene.

They use the example of a shopper in a shopping centre they had never visited before: seeing apples, they would expect to find oranges nearby, but to locate the milk, they might glance the other way.

When presented with a scene it has never seen before, the agent uses its experience to choose a few glimpses.

Based on these glances, the agent infers what it would have seen if it had looked in all the other directions, reconstructing a full 360-degree image of its surroundings.

Professor Kristen Grauman, who led the study, said: ‘Just as you bring in prior information about the regularities that exist in previously experienced environments – like all the grocery stores you have ever been to – this agent searches in a nonexhaustive way.’

‘We want an agent that’s generally equipped to enter environments and be ready for new perception tasks as they arise.

‘It behaves in a way that’s versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.’ 

‘What makes this system so effective is that it’s not just taking pictures in random directions but, after each glimpse, choosing the next shot that it predicts will add the most new information about the whole scene,’ Professor Grauman said.
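
As a rough illustration of that “most new information” idea, here’s a toy greedy glimpse picker. The real agent uses a learned model to predict information gain; the hand-written scoring rule below is only a stand-in.

```python
# Toy sketch of active glimpse selection: pick the viewing direction expected
# to add the most new information. The scoring heuristic is an assumption for
# illustration; the actual agent learns this from experience.
import numpy as np

N_DIRECTIONS = 12                        # candidate viewing directions
observed = np.zeros(N_DIRECTIONS, bool)  # which directions have been glimpsed

def expected_new_info(direction: int) -> float:
    """Unseen directions far from anything already observed score highest."""
    if observed[direction]:
        return 0.0
    if not observed.any():
        return 1.0
    seen = np.flatnonzero(observed)
    circular = np.minimum(abs(seen - direction),
                          N_DIRECTIONS - abs(seen - direction))
    return float(np.min(circular))

for _ in range(4):                       # a small budget of glimpses
    best = max(range(N_DIRECTIONS), key=expected_new_info)
    observed[best] = True
    print("glimpse at direction", best)
# A learned decoder would then infer the unobserved directions from these views.
```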

The research was supported, in part, by the U.S. Defense Advanced Research Projects Agency and the US Air Force Office of Scientific Research.

Story 5:  Quadriplegic Man Uses His Face to Move Wheelchair

Source:  Inside Edition Story attributed to Edition Staff

Link: https://www.insideedition.com/quadriplegic-man-uses-his-face-move-wheelchair-53143

See video here: https://www.youtube.com/watch?v=zARSY2UuEB4

Jim Ryan was a pilot for 38 years, but that all changed three years ago while on vacation in Hawaii with his wife, Isabelle.

“We were just in the water cooling off, so the wave came and we dove and I got twisted around and the wave drove my head in the sand … I said, ‘Well, I’ll just swim up and we’ll be on our way,’ but nothing moved,” he said.

Jim was paralyzed from his shoulders down. He had to learn to speak again and moving on his own was impossible — until he received a fancy wheelchair that can be controlled by his face. 

It’s called the Wheelie, and it’s made by Hoobox and powered by Intel software. An algorithm learns to recognize Jim’s expressions and in turn the chair responds in real time. 
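
In spirit, the control layer is a mapping from recognized expressions to drive commands. The expression names and mapping below are hypothetical, just to illustrate the idea; they are not Hoobox’s actual configuration.

```python
# Illustrative only: map facial expressions detected by the camera/algorithm
# to wheelchair commands. Expression names and mappings are hypothetical.
EXPRESSION_TO_COMMAND = {
    "smile": "forward",
    "kiss": "stop",
    "raise_eyebrows": "reverse",
    "tongue_left": "turn_left",
    "tongue_right": "turn_right",
}

def drive_command(detected_expression: str) -> str:
    # Fail safe: anything unrecognized stops the chair.
    return EXPRESSION_TO_COMMAND.get(detected_expression, "stop")

print(drive_command("smile"))  # -> forward
```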

See video here: https://www.youtube.com/watch?time_continue=14&v=0lq4puMx4bI

Story 6:  MIT’s new robot takes orders from your muscles – Signals from your biceps and triceps tell this robot how to help you lift heavy objects.

Source: Popular Science Story by Rob Verger

Link: https://www.popsci.com/mit-robot-senses-muscles

See video here: https://www.youtube.com/watch?time_continue=1&v=t4iJRy41d3Y

Imagine you’re lifting a couch with a friend. You’re both at opposite ends, and need to communicate as to when to heft it up. You could go for it at the count of three, or maybe, if you’re mentally in sync, with a nod of the head.

Now let’s say you’re doing the same with a robot — what’s the best way to tell it what to do, and when? Roboticists at MIT have created a mechanical system that can help humans lift objects, and it works by directly reading the electric signals produced by a person’s biceps.

It’s a noteworthy approach because their method is not the standard way that most people interact with technology. We’re used to talking to assistants like Alexa or Siri, tapping on smartphones, or using a keyboard, mouse, or trackpad.

Or consider the Google Nest Hub Max, a smart home tablet with a camera, which can notice a hand gesture indicating “stop” when a user wants to do something like pause a video. Meanwhile, robot cars—autonomous vehicles—perceive their surroundings through instruments like lasers, cameras, and radar units.

But none of those robotic systems are measuring a person’s flex the way this bot does. And in a situation where a person is lifting an object, a robot listening for voice commands or using cameras may not be the best approach for it to know when to lift, and how high.

The bicep-sensing robot works thanks to electrodes that are literally stuck onto a person’s upper arm and connected with wires to the robot.

“Overall the system aims to make it easier for people and robots to work together as a team on physical tasks,” says Joseph DelPreto, a doctoral candidate at MIT who studies human-robot interaction, and the first author of a paper describing the system.

Working together well usually requires good communication, and in this case, that communication stems straight from your muscles. “As you’re lifting something with the robot, the robot can look at your muscle activity to get a sense of how you’re moving, and then it can try to help you.”

The robot responds to your muscle signals in two basic ways. At its simplest, the robot senses the signals—called EMG signals—from your biceps as you move your arm up or down, and then mirrors you. You can also flex your biceps without actually moving your arm—tense your muscle, or relax it—to instruct the robot hand to move up or down.

The system also interprets more subtle motions, something it can do thanks to artificial intelligence. To tell the robotic arm to lift up or down in a more nuanced way, a person with the electrodes on their upper arm can move their wrist slightly up twice, or down once, and the bot does their bidding.

To accomplish this, DelPreto used a neural network, an AI system that learns from data. The neural network interprets the EMG signals coming from the human’s biceps and triceps, analyzing what it sees some 80 times per second, and then telling the robot arm what to do.
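
Here’s a minimal sketch of that loop, with placeholder function names invented for illustration (this isn’t MIT’s code): sample the EMG electrodes, classify the signal, and send a command to the arm about 80 times a second.

```python
# Rough sketch of the sense-classify-command loop described above.
# read_emg() and classify() are placeholders, not MIT's actual interfaces.
import time

def read_emg():
    """Placeholder for the electrode interface on the biceps and triceps."""
    return {"biceps": 0.12, "triceps": 0.03}

def classify(window):
    """Placeholder for the trained neural network's decision."""
    level = window["biceps"] - window["triceps"]
    if level > 0.05:
        return "up"
    if level < -0.05:
        return "down"
    return "hold"

def send_to_robot(command):
    print("robot arm:", command)

UPDATE_HZ = 80                         # roughly the ~80 analyses per second cited
for _ in range(3):                     # a few cycles, for illustration
    send_to_robot(classify(read_emg()))
    time.sleep(1 / UPDATE_HZ)
```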

It’s easy to see how a system like this could help anyone tasked with doing physical labor, and this research was partially funded by Boeing. “We can see this being used for factories, [or] construction areas where you’re lifting big or heavy objects in teams,” says DelPreto.

Of course, factories already commonly incorporate robots; for example, a General Motors foundry in Michigan uses robotic systems to help with jobs that are heavy, dangerous, or both, such as holding the mold for an engine block up to the spot where hot liquid aluminum flows into it. That’s a job a person can’t, and shouldn’t, do.

But the MIT system would allow for an even more direct, and perhaps more intuitive, connection between humans and machines when they’re doing something like lifting an object together. After all, humans and robots excel at different kinds of tasks.

“The closer you can have the person and robot working together, the more effective that synergy can be,” DelPreto says.

Story 7:  This Bricklaying Robot Is Changing the Future of Construction

Source:  Autodesk’s online magazine Redshift Story by Rina Diane Caballar

Link: https://tinyurl.com/yxnyvzr5

See video here: https://www.youtube.com/watch?time_continue=2&v=5bW1vuCgEaA

Brick is one of the oldest building materials, dating back to 7000 BC for sun-hardened varieties and 3500 BC for the first kiln-fired blocks. It’s also among the most versatile, used for modern “abstract menorah” shapes and stunning arches. Even the method of laying bricks—spreading mortar, positioning a brick, and smoothing out excess mortar with a trowel—has remained the same for millennia.

Now, one company aims to augment this millennia-old tradition through technology. Australia-based construction-technology firm FBR (formerly Fastbrick Robotics) has developed Hadrian X, a bricklaying robot (named after the wall-building Roman emperor) that can do its work without any human intervention.

This technology could provide wide-reaching benefits, including addressing housing shortages around the world.

“There aren’t enough people to build houses fast enough,” says Steve Pierz, chief innovation officer at FBR. “We need to automate the process through mass construction, and this is one of the ways it can be done.” Pierz also sees an opportunity to rebuild after natural disasters strike. “I envision fleets of these robots putting up housing structures quickly in disaster areas,” he says.

Hadrian X’s precision could also improve efficiency, resulting in houses that can be built faster and at lower cost than with traditional methods. And Hadrian X promotes a lean-construction approach in which productivity increases and waste decreases.

“We take a single source of data and from there, we’ll know the number of blocks and the amount of adhesive required given the lay pattern,” says Simon Amos, director of construction technologies at FBR. “So you’re fully informed up front what waste, if any, there will be.”

Hadrian X looks like a typical truck-mounted crane, but it’s assembled from complex components: a control system, a block-delivery system, and a dynamic stabilization system. Putting these systems together brings Hadrian X to life.

Once blocks are loaded onto the machine, it identifies each one and decides where it goes. The machine can also cut blocks into quarters, halves, or three-quarters when needed and store them for later use. These blocks are then fed into a boom-transport system and conveyed to a layhead, which lays out the blocks based on logic and the pattern programmed into the machine.

“The layhead is where the magic happens,” Pierz says. “So even with wind blowing and vibration shaking the entire boom, it’s holding that block, compensating hundreds of times a second to keep that block at the precise location.”
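
In spirit, that stabilization is a fast feedback loop: measure how far the block has drifted from its target and correct, over and over, hundreds of times a second. The proportional-control sketch below is a generic illustration, not FBR’s actual system.

```python
# Generic proportional-correction sketch (an assumption, not FBR's design):
# each cycle, cancel part of the measured position error of the layhead.
def stabilize_step(position_error_mm: float, gain: float = 0.4) -> float:
    """Return the correction (mm) to apply this control cycle."""
    return -gain * position_error_mm

error = 12.0                  # say a gust pushes the layhead 12 mm off target
for cycle in range(5):        # at hundreds of Hz this takes a few milliseconds
    error += stabilize_step(error)
    print(f"cycle {cycle}: remaining error {error:.2f} mm")
```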

***use the link to read the entire article

Story 8:  Meet Doggo: Stanford’s cute open-source four-legged robot

Source:  The Verge Story by James Vincent

Link: https://www.theverge.com/2019/5/20/18632562/doggo-stanford-open-source-robot-four-legged-cute

See video here: https://www.youtube.com/watch?v=2E82o2pP9Jo

Students from Stanford University have welcomed a new addition to their campus: Doggo, a four-legged robot that hopes to find a home in research labs around the world.

Doggo follows similar designs to other small quadrupedal robots, but what makes it unique is its low cost and accessibility.

While comparable bots can cost tens of thousands of dollars, the creators of Doggo — Stanford’s Extreme Mobility lab — estimate its total cost to be less than $3,000. What’s more, the design is completely open source, meaning anyone can print off the plans and assemble a Doggo of their very own.

“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” Nathan Kau, a mechanical engineering major and Extreme Mobility lead, said in a university news post. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”

Although Doggo is cheap to produce, it actually performs better than pricier robots, thanks to improvements in the design of its leg mechanism and the use of more efficient motors. It has greater torque than Ghost Robotics’ similarly sized and shaped Minitaur robot (which costs upwards of $11,500) and a greater vertical jumping ability than MIT’s Cheetah 3 robot.

Machines like Doggo are part of what some researchers think is a coming robotic revolution. Legged robots are becoming more capable, and companies like Boston Dynamics, Agility Robotics, and Anybotics are starting to position them as useful tools for jobs like site surveying, surveillance, security, and even package delivery.

Cheap robotic platforms like Doggo allow researchers to rapidly improve on control systems, the same way cheap quadcopters led to a huge boost in aerial navigation. Right now, Doggo and its ilk are made for universities and labs, but pretty soon, they’ll be trotting out into the real world.

Story 9:  Bad knees? Lockheed is working on a bionic brace – Exoskeleton technology is full of promise but hasn’t been widely adopted outside rehabilitation centers. This team wants to change that.

Source:  ZDNet Story by Greg Nichols for Robotics

Link: https://www.zdnet.com/article/bad-knees-lockheed-martin-is-working-on-a-bionic-brace/

A new bionic knee brace is in the works from a team that includes experts from defense contractor Lockheed Martin. Building on advances in rigid exoskeleton suits, the new brace uses soft robotic actuators and compliant materials, making it lightweight and, the researchers hope, practical to wear in certain real-world use cases.

Powered exoskeletons have been at the forefront of robotics research for years, but they’ve always run up against a market adoption problem. As mobility devices, they’re just too expensive to replace wheelchairs, at least for the moment. That’s left robotics visionaries searching for a market for a technology that’s felt ready for prime time for years.

Exoskeletons from the likes of Ekso Bionics have found a niche market in rehabilitation, helping stroke victims, for instance, recover lost motor function. They’re also beginning to see adoption in limited industrial applications, including in heavy industries like shipbuilding, where workers hoist heavy tools overhead and are prone to repetitive stress injuries.

The bionic knee brace is an attempt to port the same technology found in exoskeleton suits to a more specific problem: Knees tend to go bad, especially among populations that use them vigorously, such as soldiers, firefighters, and industrial workers.

The team behind the project includes members from the Georgia Institute of Technology and exoskeleton researchers with Lockheed Martin. They’ve teamed up with flexible hybrid electronics (FHE) companies NextFlex and StretchMed to bring a prototype one step closer to life, including human testing, in just six months.

This soft robotic design builds upon Lockheed’s ONYX lower-body powered exoskeleton, which Lockheed has been developing to provide running endurance and lifting strength for soldiers and which utilizes the power of FHE [Flexible Hybrid Electronics] to read user actions and adjust the device’s knee torque in near real-time.

The prototype uses novel epidermal sensors to acquire reliable data on how a user is using their knee from moment to moment, translating that information and learning to decode the user’s intent.

The sensors are flat and flexible and read biometric data such as EMG [Electromyography], temperature, and pressure. Sensor-embedded soft actuators then provide necessary support, adding structure to portions of the knee when most beneficial. 
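
A minimal sketch of that sense-then-assist idea follows, with invented field names and thresholds; the device’s actual control logic isn’t described in the article.

```python
# Hypothetical sketch: decide how much assistive torque the soft actuators
# should add based on sensor readings. Field names and thresholds are made up.
def assist_torque(reading: dict) -> float:
    loading = reading["emg"] * reading["pressure"]
    if loading < 0.2:           # knee not under meaningful load
        return 0.0
    return min(1.0, loading)    # scale assistance, capped at the actuator limit

print(assist_torque({"emg": 0.8, "pressure": 0.9, "temperature": 33.1}))
```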

For now, there’s no immediate roadmap to bring the prototype to market. With Lockheed’s involvement, the military is a likely first customer base. But the technology could soon help some 52.5 million Americans who suffer from arthritis and osteoarthritis.

Story 10:  Google plans to press play on its Stadia cloud gaming service in November

Source:  USA Today Story by Mike Snider

Link: https://www.usatoday.com/story/tech/news/2019/06/06/googles-stadia-cloud-gaming-system/1350372001/

The Stadia controller (priced separately at $69) uses WiFi to connect directly to the game running in Google's data center, and it comes with two buttons for quick access to capture footage and the Google Assistant.

Google has shed some more light on its upcoming cloud-based video game service: an entry price, a launch window and some of the games you will be able to play.

Google’s Stadia will become available in November with an entry price of $129.99 for the Founders Edition package (pre-order on Google’s Stadia site), which includes a game controller, Chromecast Ultra streaming device and a three-month subscription.

Cloud gaming promises to make it easier for consumers to play online games, as it sidesteps the need for pricey gaming PCs or console video game systems.

Google isn’t the only big player looking to deliver games in the cloud, stored on massive data centers and played via broadband. Microsoft, Amazon and Apple have all teased plans to deliver online games. The global cloud gaming market is expected to grow from $234 million in 2018 to $2.5 billion by 2023, according to estimates from data and information provider IHS Markit.

Like Netflix, Stadia has a higher quality tier: the Stadia Pro subscription, which is $9.99 monthly (after your initial three months pass). For that you can stream games in 4K video at 60 frames per second and with 5.1 surround sound on your TV, computer, tablet or Pixel 3 or 3a smartphone – eventually more phones and devices will be added, Google says.

A free Stadia Base level, coming in 2020, lets you play in high definition on your computer or Pixel smartphone  – but not on your TV. Stadia Pro subscribers will regularly get free games to play and discounts on games they purchase. (Google has not announced prices for games on Stadia.)

Since you will pay for some games, Stadia is not an “all you can eat” service like Netflix. You don’t need a Stadia subscription to play on the service, but in addition to not being able to play in 4K, Stadia Base players will not get discounts on the games for purchase.

When Stadia launches in November, the service will have at least 31 games from 21 different publishers including Bethesda Softworks (“Doom Eternal,” “Rage 2”), Bungie (“Destiny 2”), Ubisoft (“Assassin’s Creed Odyssey,” “Tom Clancy’s The Division 2”), Warner Bros. (“Mortal Kombat 11”) and 2K (“Borderlands 3” and “NBA 2K”). 

Electronic Arts and Rockstar Games also will have titles on Stadia but have not yet announced them. Also on board are exclusives from smaller independent game makers Codesink (“Get Packed”) and Tequila Works (“Gylt”).