Favorite team: LSU
Location: Baton Rouge, LA
Number of Posts: 12714
Registered on: 10/15/2017

Recent Posts
quote:

It’s real. BBC got the video first. And it’s him. It’s his riot/protesting ensemble. He wanted to look like a GI.

I haven’t read the other 17 or so pages to see if this got hashed out, but it’s a little troubling to see “it’s real, BBC got the video first” stated as fact followed by two sources that aren’t BBC.

I have no clue if this one is real or not (just because I haven’t dug into it much yet) but I’m confident about this:

If you see someone tweet a video saying “XYZ news obtained” whatever, you should really try to find and verify the reputable source before sharing it. AI image and video generation is getting insanely good, and we (society) need to start trying to vet this stuff before sharing it if we’re gonna have any fricking chance at separating the truth from the bullshite.

/rant
quote:

that’s what I thought, surprised we haven’t heard from them yet, but then again their presence isn’t strong out west.

Didn’t the scientist with the cold fusion cell escape from the Enclave in season 1? The guy from Lost who warned Goggins’ wife in the elevator this week.

Or am I misremembering that?
quote:

how the Federal Govt never stepped in and fixed it is beyond ridiculous

Federal money is the only reason any of the big projects are moving forward right now. They aren’t just going to come in and foot the entire cost of a multibillion dollar loop. The state has to have skin in the game, and the state has more projects than money already.
quote:

I don’t see that as an apt comparison. I’ve yet to see anyone so much as attempt to explain how we go from LLM to AGI. As far as I can tell, one doesn’t evolve into the other.

Part of the problem is that people can’t agree on the definition of AGI in the first place.

That being said.. if we can create models that match or surpass human performance across an array of individual tasks, we should be able to combine those models via the mixture-of-experts approach, with a core LLM likely acting as the human-machine interface.

I don’t see LLMs as “evolving” into AGI so much as being the glue that makes AGI functionally possible. They’ll have to be combined with other types of models (speech recognition, computer vision, data analysis, etc.) to extend the capabilities. We are already seeing steps in that direction with the public models available today.

Ultimately I think the technical barriers are:
1. Identifying cognitive tasks/functions that are not adequately addressed by current AI models, and developing new models to attack them.
2. Creating the core model that handles goal setting, planning, and execution. Right now it looks (to me) like this is something LLMs might be able to accomplish, but it could end up being something novel.
3. Connecting everything together and giving it the freedom to step out “into the real world.”

I don’t think any of these are insurmountable with the trajectory we are on right now. I think the real (practical) question is scalability. Building AGI is one thing. Scaling it is another, and I wouldn’t be surprised if we hit a major bottleneck in compute power before it’s all said and done.
quote:

What about eastbound? This seems to be the big problem on I-10/MRB due to a terrible design flaw of the eastbound off ramp of the bridge.

So the way they presented the data makes it hard to say exactly. They used StreetLight, which is a dataset built up from mobile device data (as I understand it). They broke BR and the surrounding area up into “zones” for origins & destinations of bridge crossings. There were 2 zones for I-10 outside of the model area (one for I-10 past Port Allen, and one for I-10 past Ascension).

The issue is that the technical report doesn’t include full data for all eastbound and westbound crossing origins and destinations for every zone. Instead, they only included the top 5 origins in each direction. This is what it looks like:



So two things:
1. I-10 External (zone 3016 on the east side of BR) didn’t make the top 5 westbound origins. So I don’t have the exact percentage, but we know it’s less than 10.6%.
2. We don’t know what the westbound crossing destinations look like in detail. However, I think it’s reasonable to assume that on average, westbound destinations should roughly align with eastbound origins.

There is the following snippet which offers some additional clarity:



This is the model they developed from the data, rather than the actual StreetLight data itself. The width of each bar corresponds to a volume of average daily eastbound traffic across the I-10 bridge. You can see how the bar is widest on the bridge itself (since that’s 100% of the volume) and quickly falls off as you get to I-10 and I-12 further east. This is a result of traffic exiting the interstate.

To my eye, it looks like something like half of the eastbound bridge traffic exits before the 10/12 split.
quote:

Some local engineer and state senator proposed a plan that would make a bypass around Baton Rouge by improving Highway 1 to tie in with the bridge at Luling (and possibly the 610 bridge). The estimated cost of the entire project was less than the cost of building one bridge across the Mississippi, but wasn't considered because it would adversely affect "historic Plaquemine."

I’m beating a dead horse at this point, but the published traffic data shows that a small percentage of I-10 bridge crossings would be affected by this.

The data shows that less than ~10% of westbound crossings originate from I-10 past Ascension. (It’s not clear how much less than 10% comes from I-10 past Ascension because they only listed the top 5 trip origins.)

In other words, vehicles traveling between I-10 west of Port Allen and I-10 east of Ascension make up a tiny portion of total daily traffic on the bridge. Most of the vehicles crossing the bridge have origins or destinations in East Baton Rouge parish. And that’s total average daily traffic.. without even accounting for the fact that the numbers are skewed more toward local traffic during peak congestion.

Contrary to popular belief, bypassing BR altogether doesn’t fix the issue. And if you were going to build a bypass around BR, you’d be better off doing a North bypass to I-12.. which is harder and still doesn’t have the impact people expect. They need more capacity to get people into/out of Baton Rouge, not around it.
quote:

How old are you? That's been standard tipping practice for decades until relatively recently.

15% was normal.
20% was for great service
10% was if they did the job but barely put in any effort.

What you are calling “standard tipping practice” is considerably better than what the other guy said. :dunno:
quote:

Why not build a limited access toll road either north or south of Baton Rouge for thru traffic? You get on it before BR, have one exit east of the river and one exit west of the river before merging back with I-10

Long answer:
You really need daily commuters to finance a toll road.. particularly in a state without a robust existing toll system.

Most of the peak traffic on the I-10 bridge is local (entering or exiting in Baton Rouge). Something like half of eastbound trips exit the interstate before the 10/12 split, with the remaining trips being split roughly 50/50 between I-10 and I-12 EB. On the west side of the river, about a third of eastbound trips originate from LA1 south of I-10. This is all according to the traffic modeling from the MRB site selection study.
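To put rough numbers on it (ballpark shares from the study’s modeling above, nothing more precise than that):

    # Rough arithmetic, all shares approximate
    eastbound_bridge_traffic = 1.00
    exits_before_split = 0.50            # roughly half exits before the 10/12 split
    continues_on_i10 = (eastbound_bridge_traffic - exits_before_split) * 0.5   # ~25%
    continues_on_i12 = (eastbound_bridge_traffic - exits_before_split) * 0.5   # ~25%
    print(continues_on_i10, continues_on_i12)   # 0.25 0.25 -> true thru I-10 traffic is only ~1/4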

That all makes a toll bridge primarily focused on commuter traffic somewhat viable, which is why it’s exactly what they want to build. The issue is that it also means the viability of the tollway decreases as you go further east and west of the river. Particularly on the west side, where a new freeway connecting from the south side of Plaquemine to I-10 would be pretty expensive.

They could have alleviated some of this by choosing one of the sites north of Plaquemine (Addis looked pretty good in particular) and utilizing the planned LA-415 connector along with improvements to LA-1, but then the issue is that all of the traffic hits Bluebonnet on the east side. Every option has its issues.

People fixate on the lack of a true controlled-access “loop” or “bypass” as a failure of the plan, but in reality the drivers most likely to use a full loop are also the least likely to be willing to pay the tolls. And the higher the price tag, the harder it is to get funding (and it’s already damn near impossible).
quote:

Why can’t the state of Louisiana build a tunnel under the Mississippi River at Baton Rouge instead of a bridge that will destroy the historic cypress forests?

If cypress forests had anything to do with the (lack of a) new bridge, this might be worth a look.

Except the issue isn’t cypress forests. The issue is paying for the bridge. So they damn sure aren’t going to switch to another, more expensive construction method.
quote:

10% Good job
15% Outstanding job
20% Outstanding job and you put yourself in harm's way.

Otherwise, they get 0.

quote:

I think it's generous but if you can convince me otherwise, I'm willing to listen.

I mean.. if your default is to tip 10% at a restaurant for a “good job” and 15% for an “outstanding job” then yeah, you’re kind of a shitty tipper.

It’s all relative though. At the place I worked in college, I generally made around 20% of sales each night. So for every person who left 10% that meant there was another who left 30% (or, more likely, something like 25% on a larger ticket). On that spectrum you’d be a pretty shitty tipper.

But there are definitely restaurants that cater to a clientele who see 10% as generous.. and on that spectrum, maybe it would be generous. So it kind of depends where you go out to eat I guess.

Most adults I know generally tip in the 18-20% range. I usually tip about 20%.. more if the service was great, less if it was bad. I don’t know if I’ve ever stiffed a server, but if I did it would probably have to be because of something egregious (not just because the service was below average).

Also the trend of weird service fees and whatnot really muddies the waters. I can’t really fault people for taking the stance that any kind of “service fee” or similar is, effectively, part of the tip.
quote:

If I do my job sub par, I get fired. I sure as hell don't get a 15% bonus.

Sub par = 0 tip.

Part of the problem is that (for servers in restaurants, at least) it’s not seen as a 15% bonus. Servers generally get paid $2.13/hour because the federal government assumes they will make at least minimum wage when accounting for tips. In other words: the federal government sees tips as part of their base income, not a bonus.

Another factor, at least when I was in college, was that they assumed you made 18% on cash sales unless you manually reported your cash tips. So you actually lost money (due to taxes) when you got stiffed on cash sales. It generally came out in the wash though, and wasn’t worth the BS of manually reporting everything. At least not to 19-year-old me.
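Quick made-up example of how getting stiffed on a cash ticket actually cost you money under that assumption:

    # Hypothetical numbers only
    cash_ticket = 100.00
    assumed_tip_rate = 0.18        # payroll assumes 18% tips on cash sales
    actual_tip = 0.00              # table stiffs you
    marginal_tax_rate = 0.15       # whatever your bracket is

    phantom_tip_income = cash_ticket * assumed_tip_rate            # $18 reported as income
    out_of_pocket = (phantom_tip_income - actual_tip) * marginal_tax_rate
    print(f"${out_of_pocket:.2f}")   # $2.70 in tax on tips you never received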
quote:

Stupid take. Any adult knows the server can't control the quality of the food.

I don’t think anyone who’s ever actually waited tables would make this assumption. :lol:

For better or worse, customers generally have a difficult time separating the service from the overall experience.

All of that being said, I think it’s the tipping expectation at counter service places that really rubs people the wrong way. There are cases like OP or the ridiculous 40% tip tweet in this thread that are obviously bad, but the constant tip options every time you pay for something with a credit card are the biggest source of fatigue, IMO.
quote:

So you bring up a good point. How do scholarships work now in NIL world? NCAA allows for so many scholarship players and then walk ons. But if everyone is getting paid, isn’t that a scholarship?

Scholarship limits were eliminated under the House settlement. Instead, there are strict roster limits. There were already squad size restrictions for games (for example SEC rules dictate how many players may dress and how many players may participate for home/away teams) but now there are firm roster limits that take it a step further.

So in football, previously you had 85 scholarships but might have 120+ players on the roster. Now you only have 105 roster spots, but you can give scholarships to all of them if you choose.

The net result is that schools can offer a lot more scholarships than they could previously, which adds quite a bit of cost. Up to $2.5 million of that new scholarship cost counts against the $21 million revenue sharing cap. Since most P4 programs are fully funding the new scholarships, this means they actually only have $18.5 million for direct revenue sharing payments.
quote:

I will say there have only been a couple of surprises to me...OL...who signed with good SEC teams. But then again, the pudding proof will be do they start or are they just "depth pieces.' If depth pieces, then we could say

LSU bad OL starters = other teams good depth pieces

So I’m looking at the OL who transferred to SEC schools:
Tyler Miller (Miss State)
Ory Williams (Tenn)
Tyree Adams (aTm)
DJ Chester (Miss State)
Coen Echols (aTm)
Carious Curne (Ole Miss)

Miller and Curne were true freshmen this year. Williams and Echols were RS freshmen. I think it’s tough to say much regardless of what those guys do elsewhere. If Adams and Chester go on to become all-SEC players, that’s gonna say a lot about Brad Davis.
quote:

Did you do the math for each sport?

As a percent of total "Team sports" revenue from the chart:

Football 82.9%
Men's Basketball 8.93%
Women's BB 1.8%
Baseball 5.33%
All others 1.05%

So baseball is getting screwed, basketball is benefitting on both mens and womens side. And this is as of June 2024, I would bet men's basketball has fallen, WBB might be up since then. Baseball is likely up. Football is down

It’s even more stark when you look at revenue over expenditures (effectively “profit”) for each sport.

For the FY ending June 2025 (so these numbers do not include revenue sharing):

Football: $64.4M
Men’s Basketball: $2.5M
Baseball: ($0.9M)

Men’s Tennis: ($1.2M)
Women’s Golf: ($1.3M)
Women’s Beach Volleyball: ($1.3M)
Men’s Golf: ($1.4M)
Women’s Tennis: ($1.5M)
Women’s Volleyball: ($2.2M)
Women’s Soccer: ($2.5M)
Women’s Gymnastics: ($2.9M)
Softball: ($3.0M)
Swimming & Diving: ($3.6M)
Track & Field: ($6.8M)
Women’s Basketball: ($8.0M)

It’s a little odd to me that WBB would get a dedicated 5% while baseball gets lumped in with the 5% to “all other sports,” when WBB is bringing in less revenue while losing more money. I get it with MBB, but it seems kind of wild to tack another ~$1M onto your $8M annual loss for WBB, unless they’re making big cuts elsewhere.
quote:

The funding is not public money

Revenue share isn’t public money? Isn’t it paid directly by the athletic department?

Regardless I wouldn’t be surprised if schools try to dodge FOIA requests by considering the payments/agreements to be confidential student records or something.
quote:

Take the rumored liquidated damages portion of this contract that is “solely in the discretion of Washington” as to the amount. No chance that survives if challenged. They can’t just make up a number. They have to back it up.

Did you look at the agreement text linked on the previous page? This is apparently language from a University of Washington contract (from last summer) that a reporter obtained via a FOIA request. The liquidated damages are pretty clear:
quote:

If Athlete transfers or enters the transfer portal prior to the end of a Consideration Period set forth in Annex A, the Athlete will: (a) reimburse, or cause the transferee institution to reimburse, the Institution a prorated portion of the Consideration, equal to the amount paid by the Institution for the remainder of the Consideration Period; and (b) pay or cause the transferee institution to pay, as liquidated damages, the remainder of the Consideration not paid under Section 3(a) above.

If his agreement has the same language, the liquidated damages would be the full value remaining on the contract. It doesn’t say anything about those damages being up to UW’s discretion.

I think the part you’re referencing might be this:
quote:

The Institution in its discretion may, after good faith discussion with the Athlete, adjust the Consideration to reflect an increase or decrease in the Athlete’s NIL value (e.g., a Heisman Trophy win may increase the NIL value and reduced playing time may decrease the NIL value).

I’m not a lawyer, so I’m not sure how this provision would be interpreted in conjunction with the LDs. If the written contract says his Consideration is $4 million, can Washington try to say his NIL value increased to $8 million, and therefore he owes them $8 million as liquidated damages? Surely that wouldn’t stand up, as you said. But then what if Washington doesn’t try to take that stance?

Seems like it’ll be interesting to see how this plays out regardless.
quote:

the courts clearly stated that the athletes are NOT employees.

The NCAA has been fighting (on the schools’ behalf) to make sure the players aren’t classified as employees for decades. The courts aren’t keeping the schools from treating players as employees; the schools are.

In an era where schools are spending $20+ million of direct athletic department funds on rosters anyway, it might be time for them to re-evaluate that strategy. I suspect the biggest hurdle is that acknowledging them as employees will require them to collectively bargain to maintain a cap. But it seems like we are headed in that direction regardless at this point.
quote:

It was supposed to be the golden test for a while because I can remember having discussions about the implications if a machine were able to pass it. Now we don’t care, how many people would think they were talking to an actual person when using a chat bot.

It’s just a benchmark. It’s significant in the sense that it seemed like an incredibly difficult bar to clear for a long time, whereas now it seems almost trivial.

It doesn’t really tell you anything about actual “intelligence,” or sentience, or anything of that sort. It’s not particularly relevant to any discussion about an AI “singularity” other than as a demonstration of what AI has already achieved.
quote:

The Turing test….and no AI has passed it yet and no sign anyone is particularly close.

I’m just going to copy and paste something I posted in another thread on the same subject:

arXiv link
quote:

Moreover, GPT-4.5-PERSONA achieved a win rate that was significantly above chance in both studies. This suggests that interrogators were not only unable to identify the real human witness, but were in fact more likely to believe this model was human than that other human participants were. This result, replicated across two populations, provides the first robust evidence that any system passes the original three-party Turing test.

A 50% win rate would “pass” the three-party Turing test, as it would mean that participants were unable to distinguish between the AI and another human. GPT-4.5’s win rate was 73%.

That means that when asked to identify the human between GPT-4.5 and another actual human, nearly 3/4 of participants said that GPT-4.5 was human and said that the actual human was AI.

And that’s a model that was released a year ago.

That being said, I’m not sure what the Turing test really has to do with the singularity in the first place. :dunno:
Part Two - Methodology

I suspect most people won't care about this, but for those who do: I wanted to explain where the numbers come from.

Strength of record is a way of looking at a team's record and asking "how would other top teams fare against that schedule?" Generally it's reported as a probability. In this case, I'm reporting it as probability that the team's record is better than the record of an average top-12 team against the same schedule.

In order to build up strength of schedule and strength of record, you need some sort of predictive metric. There are several of them out there, and I'd say ESPN FPI and Bill Connelly's SP+ are the two big ones. I chose to use SP+ because Connelly has been pretty open about how the ratings are built up, which gives me a lot more confidence in them.

You also need some sort of "reference team" to measure SOS/SOR against. Usually you will see published SOS/SOR metrics use either "an average FBS team" or "an average top-25 team." The reference that you use can make a big difference in the calculations. Here's a simplified example to illustrate the issue:





An average FBS team would be expected to win 50% of their games against team A's schedule, because all 4 games are against other average FBS teams. However, they would be expected to win 55% of their games against team B's schedule because 3 of the opponents are really bad. In other words, team A has a stronger SOS for an average FBS team.

However, a better reference team (in this case an average top-12 team) is expected to win almost all their games against mediocre opponents. As such, team B's schedule is actually more difficult - and therefore has a stronger SOS - for an average top 12 team.
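If it's easier to see in code than in a table, here's a toy version of the same thing in Python. The per-game win probabilities are made up (the original table was an image), chosen only to reproduce the flip described above:

    # Team A's schedule: four average FBS opponents.
    # Team B's schedule: one very good opponent and three really bad ones.
    schedules = {
        "Team A": {"avg FBS": [0.50, 0.50, 0.50, 0.50],
                   "top-12": [0.90, 0.90, 0.90, 0.90]},
        "Team B": {"avg FBS": [0.15, 0.70, 0.70, 0.70],
                   "top-12": [0.40, 0.99, 0.99, 0.99]},
    }

    for reference in ("avg FBS", "top-12"):
        for team, probs_by_ref in schedules.items():
            probs = probs_by_ref[reference]
            print(f"{reference:8s} vs {team}'s schedule: {sum(probs)/len(probs):.0%} expected wins")

    # avg FBS  vs Team A's schedule: 50% expected wins
    # avg FBS  vs Team B's schedule: 56% expected wins   (A's schedule looks harder)
    # top-12   vs Team A's schedule: 90% expected wins
    # top-12   vs Team B's schedule: 84% expected wins   (B's schedule looks harder)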

Here's a real-world example using Texas' and Oklahoma's 2025 regular-season schedules:



An average FBS team would find Oklahoma's schedule more difficult, but an average top-12 team would find Texas' schedule more difficult.

I actually looked at three different reference points for this analysis: average FBS team, average top-25 team, and average top-12 team. Here is the distribution of 2025 strength of record based on each reference point:



Ultimately I found that there wasn't a ton of difference between using top-12 and top-25 as the reference. The most notable difference happens when you use average FBS instead. I went with top-12 because to me, it makes logical sense when you're trying to compare top-12 teams.

So how do we actually calculate this stuff? Basically it comes down to calculating game-by-game win probabilities using the predictive metric of choice (SP+ in my case). We can convert the SP+ differential between two teams (our reference team and each opponent on the schedule) to a Z-score. To do this, we need the standard deviation. In the past I've used 17 points as the STDev for SP+. However, now I actually have enough data to calculate it since I'm already looking at 11 years' worth of games anyway:



This is also how I went about verifying home field advantage, which remained at 2.5 points as expected. So using our ~14 point standard deviation and 2.5 point home advantage, we can calculate a Z-score for any matchup and then convert that to a win probability. That's actually the easy part.
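For anyone who wants the mechanics, this is roughly what it looks like in Python. It's a sketch of the approach described above (a normal distribution around the expected SP+ margin), not my actual script:

    from math import erf, sqrt

    SP_STDEV = 14.0      # std. dev. of actual margin around the SP+ prediction (calculated above)
    HOME_FIELD = 2.5     # home field advantage, in points

    def win_probability(team_sp, opp_sp, team_is_home=True):
        """P(team beats opponent) based on the SP+ rating differential."""
        expected_margin = team_sp - opp_sp + (HOME_FIELD if team_is_home else -HOME_FIELD)
        z = expected_margin / SP_STDEV
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF

    # Example with made-up ratings: a +20 SP+ team hosting a +8 SP+ team
    print(round(win_probability(20.0, 8.0), 3))   # ~0.85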

The hard part is then crunching the numbers on 11 years of data. In the past when I looked at SEC schedules only (for only 1 year) I used a Monte Carlo simulation. But I really didn't use enough discrete simulations then, and doing enough discrete simulations now takes a long arse time because of the size of the dataset.

As it turns out, it was easier to solve everything analytically. I used a script that actually generates every win/loss permutation of a given team's schedule, at which point I can use the single-game probabilities to determine overall probability of each win/loss record.
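A stripped-down version of that permutation approach looks something like this (simplified from what my script actually does):

    from itertools import product
    from collections import defaultdict

    def record_distribution(game_probs):
        """P(exactly k wins), by enumerating every win/loss permutation
        of the schedule (2**n outcomes for an n-game slate)."""
        dist = defaultdict(float)
        for outcome in product((1, 0), repeat=len(game_probs)):
            p = 1.0
            for won, win_prob in zip(outcome, game_probs):
                p *= win_prob if won else 1.0 - win_prob
            dist[sum(outcome)] += p
        return dict(sorted(dist.items()))

    # Hypothetical 12-game slate of single-game win probabilities for the reference team:
    probs = [0.95, 0.90, 0.90, 0.85, 0.80, 0.75, 0.70, 0.65, 0.60, 0.55, 0.50, 0.40]
    dist = record_distribution(probs)
    print(sum(p for wins, p in dist.items() if wins >= 10))   # P(10+ wins)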

I tested my script by running some sample probability distributions:





By the way, this is why I've been ragging on Ole Miss' schedule for 2 years now. An average top-12 team would have just under 50% probability to win 10+ games against LSU's 2025 regular season schedule, but would have over 50% probability to win 11+ games against Ole Miss' 2025 schedule. In other words, the schedule difference between Ole Miss and LSU is basically equivalent to spotting an entire game. Wild stuff.

Anywho, once I know my script works, I can run it over the entire 11 year period and then start comparing data with the CFP rankings. :geauxtigers:

There is one issue that I've noticed - as I mentioned in the OP, I used the penultimate CFP rankings (prior to conference championship weekend) to remove the somewhat subjective value of conference championships from the analysis. However, the Big 12 did not play a conference championship game from 2011-2016; instead, their final regular season game happened during conference championship weekend. So unfortunately, this means the snapshot is looking at Big 12 teams before they actually completed their regular season (at least from 2014-2016). I don't really have an elegant solution for this problem, so at this point it is what it is.

No idea whether anybody actually cares about any of this crap, but it's a side project I've been working on for a while (because I'm a nerd) and I have nowhere else to share it. :lol:
TRIGGER WARNING - LONG POST AHEAD
TL;DR: Look at the graphs. I don't know how to talk about this in less than 1,000 words. I am who I am. Sorry.

Around this time last year, I posted this topic analyzing the disparity between conference schedules among SEC teams in 2024. At the time, I thought it would be interesting to do an expanded strength of schedule/strength of record analysis looking at not just SEC teams, but all of FBS. One thing I was particularly curious about was how the CFP Committee rankings compare with calculated strength of record over the years.

There are various places to find this information - for example, FPI has strength of record data that you can compare to the CFP rankings - but I really wanted a data source that I could dive into beyond some FPI numbers on a web page. So.. I built my own.

I'll add a separate post detailing the process, but here's the short(ish) version: I pulled historical SP+ ratings, CFP and poll rankings, and game results from collegefootballdata.com. I pulled this data for the entire CFP era to-date - 2014 through 2025. I then built a tool to calculate strength of schedule, using SP+ data, for every FBS team over that 11-year period. My tool also calculates strength of record using the same SP+ data, and there are several levers I can pull to tweak the parameters / evaluate the results.
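For anyone curious, the data pull is basically just hitting the collegefootballdata.com API season by season. Something like the sketch below - the endpoint names are how I remember the public CFBD API, so check their current docs (you need a free API key):

    import requests

    BASE = "https://api.collegefootballdata.com"        # public CFBD API
    HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}   # free key from their site

    def fetch(endpoint, **params):
        r = requests.get(f"{BASE}/{endpoint}", headers=HEADERS, params=params)
        r.raise_for_status()
        return r.json()

    seasons = range(2014, 2026)   # CFP era covered by the analysis
    sp_ratings = {yr: fetch("ratings/sp", year=yr) for yr in seasons}
    games      = {yr: fetch("games", year=yr, seasonType="regular") for yr in seasons}
    rankings   = {yr: fetch("rankings", year=yr) for yr in seasons}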

Methodology
Before I get into some of the results, a couple of quick notes & definitions just to make sure it's clear what we are looking at:

Snapshot in Time - End of Regular Season, Prior to Conference Championships
This is probably the most critical piece of the puzzle. You see, one of the issues with evaluating the CFP Committee rankings is that there's a subjective value placed on conference championships. There's no way for me to tell analytically whether this subjective value makes sense, and it really muddies the waters. To deal with this, all of the analyses that follow are based on the end of the regular season, prior to conference championship weekends. The entire snapshot for each season - records, rankings, schedule strength, etc. - is based on the end of the regular season.

Strength of Record
Strength of record, at its simplest, is a measure of how a team performed relative to the strength of their schedule. In this case, strength of record is reported as the probability that the team had a better record than an average top-12 team (in the given season) would have against the same schedule.
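In code terms, that boils down to: build the win-total distribution for the reference team against the schedule, then ask how often it falls short of the team's actual record. Here's a self-contained sketch (my real script has more moving parts, and counting ties as half credit is just my convention here):

    from itertools import product

    def record_dist(game_probs):
        """P(exactly k wins) for the reference team, enumerating every outcome."""
        dist = [0.0] * (len(game_probs) + 1)
        for outcome in product((1, 0), repeat=len(game_probs)):
            p = 1.0
            for won, wp in zip(outcome, game_probs):
                p *= wp if won else 1.0 - wp
            dist[sum(outcome)] += p
        return dist

    def strength_of_record(actual_wins, ref_game_probs):
        """P(the team's record beats what an average top-12 team would do
        against the same schedule), with ties counted as half."""
        dist = record_dist(ref_game_probs)
        return sum(dist[:actual_wins]) + 0.5 * dist[actual_wins]

    # Hypothetical: a team went 10-2 against a slate where an average top-12 team
    # has these single-game win probabilities (all numbers made up):
    ref_probs = [0.97, 0.95, 0.90, 0.90, 0.85, 0.80, 0.75, 0.70, 0.65, 0.60, 0.55, 0.45]
    print(round(strength_of_record(10, ref_probs), 3))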

The Data
So with that out of the way, let's look at some data. My biggest question going into this was "is the CFP Committee focusing too much on W/L record and not enough on schedule?" So my first step was to take a look at calculated SOR vs. the CFP Committee rankings. Here is what that dataset looks like for the CFP top 25 over the past 11 years:



Note that these SOR values have been further normalized and re-centered, which allows comparison across multiple years (as long as we focus that comparison near the re-centering point, which is around the #10 team in this case). Here's what the same data looks like without that normalization and re-centering, for reference:



So going back to the normalized chart, I chose the top 10 as my center point for analysis. Originally I was looking at the top 11 - my logic was that most years, the top 11 teams in the CFP rankings should make the 12-team playoff. However, as it turned out, that wasn't the case in either of the first two years of the expanded playoff. So I figured top 10 might make more sense.

The data points in magenta represent teams who were ranked in the top 10 by the committee, but did not have a top 10 strength of record. The data points in green represent teams ranked outside the top 10 by the committee, despite having strength of record in the top 10.

So the next question is.. who were these teams? Let's take a look:





Some of these are interesting. 2022 LSU obviously jumps out, but if you look at the SOR you'll notice that it's very low compared to the rest of the list. LSU had the 9th best SOR at the end of the '22 regular season primarily because there was a pretty weak field in 2022. Also worth noting that considering this is a snapshot before the SECCG, LSU very well may not have made a 12-team CFP in 2022 even if they were "properly" ranked by the committee.

Another that jumps out is 2025 BYU. Their 0.627 SOR means that their record, given the teams they played this year, is 62.7% likely to be better than an average top-12 team playing the same competition. They had a top-4 SOR but the committee had them ranked #11 prior to the conference championships. Ouch.

Here is another way of visualizing the same data:



The magenta dots represent teams that were ranked in the CFP top 10 at the end of the regular season. The x-axis is strength of schedule (schedules get harder as you go to the right) while the y-axis is strength of record (resume gets better as you go up).

I think this plot kind of tells the story I expected to tell, but only if you squint at it just right. The story would be that teams are better off at 10-2 with a weaker schedule than 9-3 with a harder schedule, even if that 9-3 record would actually be better because of the schedule difficulty. But you aren't talking about that many cases, and it's really at the margins (in that 0.2-0.4 SOR range, near the bottom of the expected CFP field).

The last thing I thought about was the reality that the CFP committee probably didn't care that much who was ranked #10 back in 2015. The 12-team playoff puts a higher level of scrutiny on the #8-12 (or so) teams in the rankings. So what if we only look at the two years so far of the 12-team playoff?





I think this looks a bit tighter. Again the biggest outlier is 2025 BYU, who really seems to have been screwed in the penultimate rankings.

Conclusions
All-in-all, I would say this analysis makes the CFP rankings look... better than I expected, actually. There are some clear head-scratchers, but overall it seems fairly reasonable considering we are looking at 11 years of data here. I have to admit, I was a bit surprised.

One thing that this analysis does not tackle, though, is how the rankings change following conference championship weekend. This is much harder to objectively analyze as I mentioned before. How do you put an objective value on a conference championship, beyond simply adding it to the win total/SOS calculation? It's also worth noting that some of the most controversial CFP committee decisions - particularly moving FSU out of the top 4 in 2023 - happened after conference championship weekend.