Favorite team: LSU
Number of Posts: 29101
Registered on: 11/17/2003

Recent Posts

quote:

So, what happens when people average 45 in a 55? Does that perhaps cut the capacity of the road? I'd like to shut up the "Speed limit means I'm allowed to go a lot slower, you can never be too safe," crowd.
It does cut capacity, but only a little. Safe headway is about 2 seconds in either case.

The bigger problem with going 10 under is the speed variance it creates, which leads to more accidents.
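The "only a little bit" part is easy to check with the standard flow model: lane capacity = speed / (speed × headway + vehicle length). A rough sketch, where the 2-second headway and ~5 m effective vehicle length are assumptions, not measurements:

```python
def flow_per_hour(speed_mph, headway_s=2.0, car_len_m=5.0):
    """Vehicles per hour one lane can carry at a given speed,
    assuming a fixed time headway plus vehicle length."""
    v = speed_mph * 0.44704                 # mph -> m/s
    spacing_m = v * headway_s + car_len_m   # bumper-to-bumper spacing
    return 3600 * v / spacing_m

q55 = flow_per_hour(55)   # roughly 1630 veh/h
q45 = flow_per_hour(45)   # roughly 1600 veh/h
print(f"55 mph: {q55:.0f} veh/h, 45 mph: {q45:.0f} veh/h, "
      f"loss: {100 * (1 - q45 / q55):.1f}%")
```

With these numbers the 45-in-a-55 crowd costs the lane only about 2% of its capacity, because the 2-second gap dominates the spacing at either speed.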
quote:

$20/month to replace $800/month? I'll take it.
If that type of trade works out, sure, but if we all start using the kind of resources (currently) required to build custom apps to replace all our services, $20/month probably won't be enough. Devs who use AI can easily run up bills in the hundreds, and that's with smart prompts. If a billion dummies start tossing around shitty prompts, it gets expensive.
quote:

There are also free versions of AI floating around. China is going to flood the market with them to sink our tech sector.
Free on the front end. It will be paid for. Might want to stick with the $800/mo.
quote:

The edge computer only sends lightweight METADATA back to the central server. Instead of sending a gigabyte of video,
I'm not even considering video in my AI ideas, but yes, vehicle detection and tracking from video should absolutely be done at the edge. Then discard the video.
quote:

it sends a tiny text packet that essentially says: "Northbound lane has 5 cars, average speed is 35 mph, pedestrian waiting to cross."
Yeah, so what I'm getting at is: who/where does that data get sent? Certainly the next intersections in each direction, but that information is also useful to other intersections city-wide.
quote:

Because of EDGE AI, the central city server doesn't need to be a supercomputer. It just acts as a dashboard collecting tiny status updates and coordinating the broader grid.
It's the "coordinating the broader grid" part that is a lot harder than it might seem. Keep in mind, if your intersection is emitting tiny packets of data, it will also be consuming tiny packets from multiple other sources. And all those tiny packets have to be consumed by some machine and considered as a whole before new packets are emitted back to the edge.

Consider an accident 10 lights away... how should your intersection handle it? We want to avoid creating a mile of congestion that cascades down every cross street, right? Maybe the optimal strategy is to start delaying traffic within a certain radius from approaching the accident and accelerate traffic moving away from the site of the accident. Prevent congestion in the area by slowing vehicle accumulation and speeding dispersion. Responders can get to the accident and do their jobs faster, making it better for everyone.

As for the traffic we intentionally delay, do we let their navigation re-route independently, or do we send signals to coordinate their new routes to optimize? It's a lot to think about.
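The slow-accumulation/speed-dispersion idea above could be sketched as a simple per-intersection policy. Everything here is invented for illustration: the 10-hop radius, the +/-10 seconds of green adjustment, and the linear falloff are made-up numbers, not a real signal-control algorithm:

```python
def adjust_phase(green_s, hops_to_incident, heading_toward, radius_hops=10):
    """Toy policy: within the radius, trim green time for approaches
    heading toward the incident and extend it for approaches heading
    away, slowing vehicle accumulation and speeding dispersion."""
    if hops_to_incident > radius_hops:
        return green_s                      # outside the zone: no change
    # intersections closer to the incident adjust more strongly
    factor = 1 - hops_to_incident / radius_hops
    delta = 10 * factor                     # up to +/-10 s of green
    return green_s - delta if heading_toward else green_s + delta

# 3 lights from the incident, 30 s base green:
print(adjust_phase(30, 3, heading_toward=True))    # shorter green inbound
print(adjust_phase(30, 3, heading_toward=False))   # longer green outbound
```

Even this toy version shows why a coordinator is needed: each intersection has to know its distance and direction relative to an incident it can't see locally.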
quote:

Hello Edge AI
Yep, even better if each vehicle or driver's phone could interact with the intersection's edge compute directly. But a central controller (or clustered decision-maker) would still be required to coordinate things, otherwise each intersection would compete to optimize itself at the expense of the system as a whole.
quote:

I want AI to take over all traffic light programming and timing
I wouldn't say I want it to "take over", but it should absolutely be used as a non-authoritative layer in the system.
quote:

What makes proper timing difficult?

It's difficult because of the sheer quantity and diversity of variables. If you only consider one intersection, you can tune and tweak it, take every variable into account, and get it "perfect" so that average wait times are minimized in all directions at all times.

But then add a second intersection where some of the same traffic interacts with both. Already it is likely impossible to optimize wait times at both intersections simultaneously. The best you can do is balance them to optimize the two-intersection system. And the problem gets harder exponentially as you scale up to hundreds of intersections. Each individual intersection will be pretty far from optimal in order to optimize the system as a whole.

And to truly optimize a whole city, the AI would need navigation data from every vehicle to head off congestion before it forms. Keeping up with all of that might require an entire data center.
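The two-intersection point can be shown with a toy model: give the pair one shared signal offset, invent a delay curve for each intersection, and note that the offset that's best for one intersection is not the offset that's best for the system. The quadratic delay curves here are made up purely to show the shape of the problem:

```python
# Toy delay curves (seconds) as a function of a shared signal offset.
def delay_a(offset):
    return (offset - 10) ** 2          # A alone is happiest at offset 10

def delay_b(offset):
    return 2 * (offset - 30) ** 2      # B's platoon arrivals favor offset 30

offsets = range(0, 61)
best_a = min(offsets, key=delay_a)                               # 10
best_sys = min(offsets, key=lambda o: delay_a(o) + delay_b(o))   # 23
print(f"best for A alone: {best_a}, best for the system: {best_sys}")
```

The system optimum lands between the two individual optima, so intersection A runs "pretty far from optimal" on its own terms, exactly as described above, and every intersection you add moves the target again.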
quote:

I do think large-scale SAAS is in trouble and I'm so happy. This started the subscription model that's taking over everything in life.

But AI is SaaS
quote:

Probably not, especially over the long term. EVs weigh an average of 20%-50% more than a similar model/size ICE vehicle. More weight = more wear and tear on roads.
If only ICE passenger vehicles were on the roads, the roads would last almost forever. If only EV passenger vehicles were on the roads, the roads would last almost forever. Weather and vegetation would dominate the wear and tear. A single semi causes more wear and tear than thousands of passenger vehicles, since pavement damage scales with roughly the fourth power of axle load. I don't think the proportion of wear and tear should be a major factor in how much should be paid in taxes. If it were, the tax burden on trucking would be enormous.

I don't know why anyone would have a problem with road use taxes. Either that or tolls.
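The semi-vs-car comparison follows from the rough fourth-power rule from the AASHO road tests: pavement damage scales with about the fourth power of axle load. A quick sketch, where the axle weights are typical ballpark figures, not measurements:

```python
def esal_per_axle(axle_lb, reference_lb=18000):
    """Rough relative pavement damage of one axle pass, per the
    AASHO fourth-power rule (18,000 lb reference axle = 1.0)."""
    return (axle_lb / reference_lb) ** 4

car = 2 * esal_per_axle(2000)      # ~4,000 lb car on 2 axles
semi = 5 * esal_per_axle(16000)    # ~80,000 lb rig on 5 axles
print(f"one semi pass ~= {semi / car:,.0f} car passes of road wear")
```

With these ballpark weights one loaded semi works out to roughly 10,000 car passes, which is why the "EVs are 20-50% heavier" point barely moves the needle next to trucking.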

re: 7700 ryzen 7

Posted by Korkstand on 4/15/26 at 12:57 pm to
quote:

Is this a good cpu for running fedora
That's probably 10X the CPU necessary for Fedora, and 100X the CPU necessary for headless Fedora.
quote:

and some vms
This is far too vague. How many VMs: 2, 20, or 200? What are they running? This could range anywhere from this CPU being 10X overkill to needing a cluster of 100 of them.
quote:

and some gaming
What kind of gaming will you be doing on linux?
quote:

I use my Windows devices more because so many legacy desktop apps at my place of work only run on Windows.
Funny enough, there are some legacy Windows programs that run better on Linux/WINE than on modern Windows.
I'm really not seeing the trap. QEMU's qcow2 might be the easiest disk format to convert. The entire point of virtualization is to escape the trap of a machine being tied to the hardware it runs on.

If it's just added complexity for you and you wouldn't take advantage of the benefits, ok. But you're going to have a hard time convincing me that virtualization, and especially proxmox, adds a layer of lock-in and removes flexibility. It's the opposite for me.
I haven't tried to escape Proxmox, but it looks as easy as exporting the machine and then importing it to something else. Containers might take a little more work, but it looks doable.

If you only have 2 nodes and a workload that fits them well, I guess it would be simpler to eliminate proxmox. Personally though, I still like the added layer of separation and flexibility that virtualization gives you. Like in the case of an OS upgrade, even with just 1 machine, it's pretty simple to take a snapshot that I can restore if something breaks. Or spin up another VM on that host with an already-upgraded OS and move the containers over.
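The export-then-import path is mostly a disk-format conversion job. A minimal sketch, assuming a qcow2 disk on file-based Proxmox storage; the hostname, paths, and VM ID are made up, and the actual disk path varies by storage backend:

```shell
# Copy the disk image off the Proxmox host (path varies by storage backend)
scp root@pve:/var/lib/vz/images/100/vm-100-disk-0.qcow2 .

# Convert to the target hypervisor's format, e.g. VMDK for VMware/ESXi
qemu-img convert -p -f qcow2 -O vmdk vm-100-disk-0.qcow2 vm-100-disk-0.vmdk

# Or to raw, which almost anything can import
qemu-img convert -p -f qcow2 -O raw vm-100-disk-0.qcow2 vm-100-disk-0.raw
```

You'd still recreate the VM definition (CPU/RAM/NIC settings) on the new platform by hand, but the disk, which is the part that matters, comes along cleanly.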
quote:

and (if everything goes well this week) no more PVE
Can I ask why you want to eliminate PVE? Proxmox has been the best thing I ever did for my homelab.
quote:

Apparently one popular move is to mix the Mac and cheese and the bbq beef brisket.
This is my go-to at every potluck. Mac and cheese and meat on a dinner roll. DGAF if I look like an 8 year old.
quote:

Guarantee the quality of American life would go up if you put the billions wasted on NASA into say building more gasoline refineries

$4.00 gas is a bigger issue than sending anyone to the moon.

NASA's budget is a drop in the bucket, less than half a percent of federal spending. It is an extremely cheap way to buy science, tech, industrial capacity, prestige, inspiration, etc. frick, the national security implications of being left behind in the space race alone could justify a huge increase in NASA spending.
quote:

Technology to send men to the moon on a rocket....but dont have one good camera.

Kinda like the first go round
quote:

Man one thing AI does for me that's mindblowing is it stops me from being scared to do things. I'm a genuinely curious person, but always worried about doing something incorrect or whatever. Whether it's a home project or a work project.
Yep, and the funny thing is that sometimes it's dead wrong but it doesn't matter, it still convinces us to do things. Maybe it helps to have something to blame when it goes wrong? :lol:

re: what have you done with AI today?

Posted by Korkstand on 3/31/26 at 10:07 am to
quote:

quote:

home assistant dashboards and automations
Will you elaborate on this?
You're gonna have to tell us your starting point because HA is a whole world of stuff and elaboration could fill books. :lol:

Assuming you are starting at ground zero, here is what I will do with AI today:
quote:

Home Assistant is an open-source platform that lets you monitor, control, and automate devices in your home from a single system. It runs locally on hardware like a mini PC, Raspberry Pi, or server, and connects to thousands of devices—lights, thermostats, cameras, sensors, smart plugs, and more—across many brands. Because it operates primarily on your local network, it emphasizes privacy, reliability, and independence from cloud services.

What makes Home Assistant especially powerful is its automation engine and extensibility. You can create rules like “turn on lights when motion is detected,” “notify me if a freezer gets too warm,” or “shut down equipment if power quality drops.” Home Assistant often serves as the central integration hub—collecting data from devices, visualizing it on dashboards, and triggering actions based on conditions.
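For a concrete starting point, the "turn on lights when motion is detected" rule from the quote looks like this as a Home Assistant automation. The entity IDs are placeholders you'd swap for your own devices:

```yaml
automation:
  - alias: "Hallway lights on motion"
    trigger:
      - platform: state
        entity_id: binary_sensor.hallway_motion
        to: "on"
    action:
      - service: light.turn_on
        target:
          entity_id: light.hallway
    mode: single
```

You can build the same thing in the UI's automation editor without touching YAML, which is the easier on-ramp if you're starting at ground zero.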
quote:

The govt is not going to ... shut your home network down.
Not until their preferred equipment is in all our homes.
I've told my wife if I'm ever on life support, unplug me. Then plug me back in, see if that helps.
quote:

Have you tried plan mode? I believe that is what it is designed for.
A little bit. It helps with outlining changes and generating more specific prompts, but it doesn't really help me map out the direction of things. I'm using ChatGPT to refine fuzzy concepts, make UI mockups, and figure out UX, and codex alone wasn't giving me good results.

I asked ChatGPT if I was doing it wrong :lol:
quote:

You should:

Keep using me for:
- system design
- UX philosophy
- primitives
- naming
- mental models

Use Codex Plan Mode for:
- “implement this exact thing”
- “here’s the architecture, break it into steps”
- “modify these files safely”

But of course this could change tomorrow.


And for context, I'm trying to build a pretty complicated app, the idea and concept for which I have referenced and refined in 100+ ChatGPT chats over the last year or so. GPT's memory has accumulated tons of rules and guidance to help me avoid leading codex down the wrong path. And instructing codex to keep a markdown file describing the current state of the code updated is the feedback loop I was missing; it should have been obvious in hindsight.
I've used codex off and on for a while, but since it's tuned for writing code rather than conversing and "talking out" the direction of development, I've been struggling to produce nice things. Maybe I'm slow/stupid and everyone else already figured this out, or maybe there's a better way/tool I'm not aware of, but I've started doing the following:

I create a project in ChatGPT to collect chats related to a particular app. It guides me on what to build and how to structure it. When we decide we're ready, I have it produce a prompt for codex, typically a 1k+ word outline of the specific changes to make, changes not to make, constraints, etc. This is *much* better input to codex than the few sentences I was writing before. Also included in the prompt are instructions for codex to update a couple of design/state documents that describe the codebase. I feed those docs back into ChatGPT so it stays aware of the state of things and generates better prompts to feed into codex.