
Does Tesla’s “Full Self-Driving” approach have a “swing problem”?

Tesla fans are by now well aware of Tesla’s approach to “Full Self-Driving,” but I’ll give a very brief summary here to make sure all readers are on the same page. In principle, Tesla drivers in North America who bought the Full Self-Driving package and passed the Safety Score test currently have a door-to-door version of Tesla Autopilot, the FSD Beta, activated in their cars. If I put a destination into the navigation of my Tesla Model 3, the car will, theoretically, drive itself there from the moment I leave the driveway. It’s nowhere near perfect, and drivers need to watch the car closely as it drives and intervene when necessary, but it will now at least attempt to drive “anywhere.” When we drive with Full Self Driving (FSD) enabled and a problem comes up (either a disengagement, or the driver tapping the video icon to send a clip of a recent drive to Tesla headquarters), members of Tesla’s Autopilot team review the clip. If needed, they recreate the scenario in a simulation program and work out the correct response in order to teach the Tesla software how to handle that situation.
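
For readers who like to think in code, here is a minimal, hypothetical sketch of that report-and-retrain loop. Every name in it (Clip, needs_attention, feedback_cycle) is an illustrative placeholder of my own, not Tesla’s actual systems or data:

```python
# A minimal, hypothetical sketch of the clip -> simulation -> retrain -> redeploy
# feedback loop described above. All names and data are illustrative placeholders,
# not Tesla's actual systems or code.

from dataclasses import dataclass
from typing import List

@dataclass
class Clip:
    description: str
    disengaged: bool       # the driver had to take over
    driver_flagged: bool   # the driver tapped the report icon

def needs_attention(clip: Clip) -> bool:
    """Triage: only clips tied to a disengagement or a driver report get reviewed."""
    return clip.disengaged or clip.driver_flagged

def feedback_cycle(fleet_clips: List[Clip], training_scenarios: List[str]) -> None:
    """One pass of the loop: problem clips become new training scenarios,
    and the retrained behavior then ships to every car in the program."""
    for clip in fleet_clips:
        if needs_attention(clip):
            # In practice this step would mean recreating the situation in
            # simulation and labeling the correct response.
            training_scenarios.append(clip.description)
    print(f"Retraining on {len(training_scenarios)} scenarios, then pushing an update fleet-wide.")

feedback_cycle(
    [Clip("phantom braking near an overpass", disengaged=True, driver_flagged=False),
     Clip("hesitation at an unprotected left turn", disengaged=False, driver_flagged=True),
     Clip("uneventful residential stretch", disengaged=False, driver_flagged=False)],
    training_scenarios=[],
)
```

The important detail for my theory is the last step: whatever the team teaches the software in response to one clip goes out to every car running FSD Beta.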

Tesla FSD in action. © Zachary Shahan / CleanTechnica

I gained access to FSD Beta a few months ago (early October 2021). When I got it, I was very surprised at how bad it was in my area. I was surprised because 1) I had seen a lot of hype about how good it was (including from Elon Musk and other people I generally trust on Tesla matters), and 2) I live in an area that is very easy to drive in (a suburban part of Florida). When I started using FSD Beta, I just didn’t expect it to have significant problems with basic driving challenges in about as simple a driving environment as you can find. Still, I held onto some hope that it would learn from its mistakes and from the feedback I was sending to Tesla headquarters. Surely it couldn’t be that hard to fix some of the glaring issues, and each update would get better and better.

Since then, I have seen some improvements. However, the updates have also brought new problems! I didn’t expect that, at least not to the extent I’ve seen it. I’ve pondered this for a while, essentially trying to understand why Tesla FSD is not as good as I had hoped it would be by now, and why it sometimes gets noticeably worse. One potential problem is what I call the “swing problem.” If my theory is valid to any significant degree, it may point to a critical flaw in Tesla’s approach to widespread, generalized self-driving.

What worries me is that when Tesla fixes these issues and pushes new software to customers’ cars, those fixes create problems elsewhere. In other words, Tesla is just riding a software swing back and forth. I’m not saying this is definitely what’s happening, but if it is, then Tesla’s approach to artificial intelligence may be inadequate for the task without significant changes.

Having driven for months while thinking about what the car sees and how the FSD software responds, I’ve realized there are many more nuances to driving than we typically appreciate. There are different signs, differences in the roadway, differences in traffic and visibility, animal activity, and human behavior that we notice and then decide either to ignore or to react to. Sometimes we watch closely for a while before choosing between those two options, because we know that small differences in a situation can change how we should respond. The things that make us react or not react are extremely varied and very difficult to put into boxes. Or, to put it another way: if you put something into a box (“respond like this here”) based on how a person should react on one drive, it’s inevitable that the same rule will misapply in a similar but different scenario and cause the car to do things it shouldn’t (for example, react instead of ignoring).
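
To illustrate that “box” problem in the simplest possible terms, here is a toy sketch. It’s my own hypothetical example of a hand-written heuristic, not a claim about how Tesla’s vision networks actually make decisions:

```python
# Toy illustration of a "rule in a box" misfiring in a similar scenario.
# This is a hypothetical, simplified example, not Tesla's actual software.

def should_brake(detected_object: str, distance_m: float, entering_roadway: bool) -> bool:
    """Naive rule: brake for any pedestrian detected within 30 meters."""
    # The rule never looks at entering_roadway, which is exactly the nuance
    # a human driver would weigh before deciding to react or ignore.
    return detected_object == "pedestrian" and distance_m < 30.0

# The scenario the rule was written for: a pedestrian stepping into the road.
print(should_brake("pedestrian", 20.0, entering_roadway=True))   # True - correct reaction

# A similar but different scenario: a pedestrian standing on the sidewalk.
# The same rule still fires, and the result is phantom braking.
print(should_brake("pedestrian", 20.0, entering_roadway=False))  # True - overreaction
```

Fixing that one false trigger sounds trivial, and in isolation it is. The trouble, as I describe below, is what such a fix can do elsewhere.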

Let me try to express this more specifically and more concretely. The route I drive most often is the 10-minute trip from my home to my children’s school. It’s an easy drive on mostly residential roads with wide lanes and moderate traffic. Even before I had FSD Beta, I could use Tesla Autopilot (adaptive cruise control, lane keeping, and automatic lane changes) on most of this route, and it did the job superbly. The only reason I didn’t use it for nearly the whole drive was potholes and a few particularly bumpy stretches where you need to drive off-center in the lane so your teeth don’t rattle (only a slight exaggeration). Aside from those comfort and tire-protection issues, the only reason it couldn’t drive the whole way was that it couldn’t make the turns.

When I passed the Safety Score test and got FSD Beta, that also meant giving up radar and relying on “vision only.” The new and “improved” FSD software could hypothetically perform the same tasks, but could also make those turns. However, FSD Beta using vision only (no radar) had problems, first and foremost a lot of phantom braking. Each time a new version of FSD Beta was released and some Tesla enthusiasts raved about how much better it was, I looked forward to getting the update and testing it. Sometimes it got a little better. Other times it got much worse. Lately, it has engaged in some insane phantom turning, and the phantom braking seems to be triggered by different things than on previous drives. This is what led me to suspect that fixes for issues other Tesla FSD Beta users encountered elsewhere have caused overreactions in some of my driving scenarios.

Tesla FSD on a residential road. © Zachary Shahan / CleanTechnica

In short, my guess is that an overly generalized system, at least one based solely on vision, can’t adequately respond to the many different scenarios drivers encounter every day, and that resolving every little trigger or false trigger in just the right way involves too many nuances. Teaching the software to slow down for “ABCDEFGY” but not for “ABCDEFGH” may be easy enough, but teaching it to respond properly to 100,000 different nuances is impractical and unrealistic.
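
Continuing the earlier toy heuristic (again, a hypothetical sketch of my own, not Tesla’s actual approach), this is what the “swing” looks like in miniature: a patch that silences one owner’s phantom braking quietly breaks a case the old behavior handled correctly.

```python
# Toy illustration of the "swing problem": patching one scenario regresses another.
# This is a hypothetical, simplified example, not Tesla's actual software.

def should_brake_v2(detected_object: str, distance_m: float,
                    entering_roadway: bool, standing_in_lane: bool) -> bool:
    """Patched rule: ignore pedestrians who are not entering the roadway,
    to stop the sidewalk false positive from the earlier sketch."""
    if detected_object != "pedestrian" or distance_m >= 30.0:
        return False
    # The patch still never looks at standing_in_lane, so it cannot tell
    # the two scenarios below apart.
    return entering_roadway

# The complaint that prompted the patch is fixed:
print(should_brake_v2("pedestrian", 20.0, entering_roadway=False,
                      standing_in_lane=False))  # False - no more sidewalk phantom braking

# But a case the old rule got right is now broken: a pedestrian standing
# motionless in the middle of the lane no longer triggers braking.
print(should_brake_v2("pedestrian", 20.0, entering_roadway=False,
                      standing_in_lane=True))   # False - a new problem pops up elsewhere
```

Real driving involves vastly more variables than two boolean flags, of course; this is just a cartoon of the back-and-forth I think I’m seeing from update to update.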

Perhaps Tesla FSD can achieve an acceptable level of safety with this approach. (At this point, I’m skeptical.) However, as several users have noted, the goal should also be to make the drives smooth and pleasant. With this approach, it’s hard to imagine Tesla reducing phantom braking and phantom turning enough to make the driving experience “satisfying.” If it succeeds, I’ll be happily surprised and one of the first to say so.

Tesla FSD visualization in a mall parking lot. © Zachary Shahan / CleanTechnica

I know this is a very simplistic analysis, and the “swing problem” is just a theory based on user experience and a rather limited understanding of what the Tesla AI team is doing, so I’m not claiming it’s a certainty. At the moment, however, it seems more logical to me than assuming Tesla will adequately teach its AI to drive well across the varied environments and scenarios where FSD Beta is deployed. If I’m missing something or my theory is clearly wrong, feel free to tell me in the comments below.


 

