apexalpha 11 hours ago [-]
As I play more with agents like Hermes and OpenClaw, I've come to realise these truly are the new GUI.
I have Radarr and Sonarr running on my home server. I switched my model to cloud Claude, pasted in the API docs of said apps, and told it to make 'search, add, remove, update, and statusupdate' available in a small MCP.
It took 7 minutes. I switched back to my local Qwen3.6 model, and I haven't touched the web interface of Radarr or Sonarr in weeks. I just ask the model.
Everyone now gets a chat with my (Telegram) AI bot instead of relaying requests through me.
I have been looking into a decent local device: DGX Spark, Mac Studio, etc. I think I am willing to spend on this; it really does feel like the 'iPhone moment' for me. I am not going back to individual front-ends for everything when my AI bot is a unified frontend for all API-based software.
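For the curious, the generated glue can be tiny. Here's a hedged Python sketch of what such a tool wrapper looks like - not my actual code; the host, port, and key are placeholders, and the endpoint paths come from Radarr's public v3 API docs:

```python
import json
import urllib.parse
import urllib.request

RADARR_URL = "http://homeserver:7878"  # placeholder: your Radarr host
API_KEY = "your-radarr-api-key"        # placeholder: Settings > General

def build_request(method, path, body=None):
    """Construct an authenticated request against Radarr's v3 REST API."""
    return urllib.request.Request(
        f"{RADARR_URL}/api/v3{path}",
        data=json.dumps(body).encode() if body is not None else None,
        headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
        method=method,
    )

def search(term):
    """'search' tool: look a movie up by title via GET /movie/lookup."""
    return build_request("GET", f"/movie/lookup?term={urllib.parse.quote(term)}")

def status():
    """'statusupdate' tool: query GET /system/status."""
    return build_request("GET", "/system/status")

# urllib.request.urlopen(search("Dune")) would perform the actual call.
```

Each of these becomes one MCP tool; the model just picks the tool and fills in the arguments.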
echelon_musk 9 hours ago [-]
> instead of relaying requests through me.
Overseerr is a thing.
bakies 4 hours ago [-]
People just text me their request. You want me to set up yet another *arr service, have people set up accounts or send passwords around, expose the one service that isn't behind my safe VPN as the only thing connected to my home network, and maintain something that's set up differently, which means more work, and people won't remember the password or account, and blah blah blah.
Yeah I'll just have them text me and I'll paste that into my AI chat and it'll be done.
echelon_musk 3 hours ago [-]
> you want me to set up yet another *arr service to have people set up accounts
Authentication to Overseerr is done with a Plex account. No new accounts.
apexalpha 9 hours ago [-]
1) You're missing the point. Overseerr can do this, yes, but an AI model can work with ANY REST API.
2) How does Overseerr help? I've never really understood it: if I could give my family access to Overseerr over a VPN, I could just give them Sonarr / Radarr directly.
c-hendricks 2 hours ago [-]
> How does Overseerr help
It's nicer to use than Sonarr / Radarr, and it's a single interface for people to learn. It has suggestions for people who need that. There's also an MCP for it, so instead of setting up N MCPs for Sonarr / Radarr / The Next Arr, you set up one for Seerr.
You can also link people's Plex Watchlists, so the whole request / add-to-Sonarr/Radarr flow can be done from anywhere they use Plex.
echelon_musk 6 hours ago [-]
Expose Overseerr over HTTPS, either from your home network or from a free VPS that tunnels into the home network where Radarr and Sonarr live. It's simpler for users to go to a website than to also have to use a VPN.
This of course assumes you already have Plex or Jellyfin exposed to the public internet without a VPN.
If you're already forcing users to use a VPN for Plex/Jellyfin then hosting Overseerr locally and making it accessible over the VPN should be trivial.
bakies 4 hours ago [-]
Overseerr runs on Plex's media port? Plex has the media port 32400 NAT hole-punched, but none of my other web services are exposed outside my VPN.
echelon_musk 3 hours ago [-]
Do what I do and use the Tautulli sqlite3 database to get the IP addresses of Plex users, then add them to an ipset filter in nftables/iptables that whitelists them to access an nginx proxy serving Overseerr on port 443. It's glorious. I'm sure any LLM can cook this up for you.
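For anyone who wants to one-shot it, a rough Python sketch of the glue. The `session_history` table and `ip_address` column are assumptions about Tautulli's SQLite schema, and the nft commands are only printed, not executed, so you can review them before feeding anything to a shell:

```python
import sqlite3

def plex_user_ips(db_path):
    """Pull distinct client IPs from Tautulli's history database.
    (Table/column names are assumed from Tautulli's schema.)"""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT DISTINCT ip_address FROM session_history "
            "WHERE ip_address IS NOT NULL"
        ).fetchall()
    finally:
        con.close()
    return [r[0] for r in rows]

def nft_whitelist_cmds(ips, set_name="plex_users"):
    """Emit nft commands that add each IP to a named set; a matching
    nftables rule would then accept port-443 traffic from that set."""
    return [f"nft add element inet filter {set_name} {{ {ip} }}" for ip in ips]
```

Run it from cron, pipe the printed commands through review, and the set stays current as users' home IPs change.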
bakies 4 hours ago [-]
Oh man, what a great idea! Stealing that. I don't even have to ask you to share it; I'm sure I can nearly one-shot the same service for myself. :)
ionwake 7 hours ago [-]
Your setup sounds awesome. Can I ask: would a high-powered Mac Studio help your setup a lot, or not really? I'm just wondering whether I truly need to upgrade from a 32GB MacBook Air to a Studio with 256GB RAM, or if that's just overkill for the models we have (also, the Studio would be an M2, which is years old now).
apexalpha 52 minutes ago [-]
Well, I run the new Qwen3.6 or Gemma4 now. I have a 48GB RAM MacBook, but it's from work, so it's not really meant to be serving models 24/7.
And then for 'big' work I tend to switch to a cloud offering. With a 128GB Mac I would probably be 100% offline.
What if I told you that you can use the agent directly with Seerr? That way you don't have to make or maintain skills/MCPs for each of the services.
OP said he does local inference, and you are complaining about 'wastefulness'. OP brought the request service to where his friends already were; for them, this is probably the better solution. Seerr cannot expose itself to Discord/Telegram and requires users to have another account.
I mean, sure you can have the AI code those too.
There are plenty of ways to solve problems, and some people do it differently. Sometimes, just to see if it works. This is a good thing. Not everyone has the time nor inclination to maximize efficiency.
c-hendricks 1 hour ago [-]
> Seerr cannot expose itself to discord/telegram and requires users to have another account
Seerr can use Plex accounts; assuming people are using OP's Plex library, there are no new accounts that need to be made.
> OP brought the request service to where his friends already were ...
Another thing Seerr can do is scan the watch lists of any connected Plex accounts, so assuming the friends are already using Plex, they'd already have access to it.
> Not everyone has the time nor inclination to maximize efficiency.
Wasn't that a thing LLMs were supposed to help with?
klueinc 10 hours ago [-]
How are you handling secrets? I want Hermes to do stuff on the internet, but I'm not enthused about dumping the required keys in .env.local or using process wrapper services like Infisical yet. Encapsulating Hermes in a Docker sandbox feels slippery, and I'm always left wondering if I've hardened my server enough.
apexalpha 9 hours ago [-]
OpenClaw uses the API key for Sonarr / Radarr; no secrets management (yet).
Though egress is heavily restricted for OpenClaw, and everything is behind a firewall.
ulfw 7 hours ago [-]
There is no GUI in anything you're saying. How is this 'the new GUI'?
apexalpha 60 minutes ago [-]
Is a web app with a frontend not a GUI? That's what I meant.
dumbmrblah 5 hours ago [-]
I have a similar set up. I communicate/chat with the Hermes agent via Matrix chat client.
So rather than having to go to a bunch of different websites or apps to get things done, I've linked them all to Hermes (via skills) and chat with the Hermes agent on my phone.
I want a movie? I just say download XYZ. Shows up in Plex 5 min later.
I want to research something from multiple different perspectives? Rather than going to OpenWebUI and using that, I just ask a Hermes agent to examine an issue from multiple different viewpoints and get back to me with a conclusion.
bitwize 5 hours ago [-]
It's the next paradigm in how people interact with computers.
tomwheeler 2 hours ago [-]
Back in January, I was debating whether to buy an M3 Ultra Mac Studio or wait for the M5 version, which many believed would be announced two months later.
The 192GB M3 Ultra was on sale at the local Microcenter for $200 below what Apple's own site advertised. Since I knew the RAM shortage would significantly increase the price of the M5 Studio when (or if) it finally did come out, I decided to buy the M3. Time has shown that was the right decision.
rbanffy 11 hours ago [-]
I was hoping for an M5 mini and Studio ahead of time, but I guess I'll have to wait a little longer.
Maybe by the time they sort it out there will be an M5 Ultra Mac Studio with a full terabyte of RAM.
apexalpha 11 hours ago [-]
If I start the application process now I might have my second mortgage approved by the time said 1TB RAM M5 Ultra is available.
yomby 9 hours ago [-]
You own a house?!
ionwake 9 hours ago [-]
The second mortgage is to pay for the first mortgage
apexalpha 9 hours ago [-]
Might have to trade it in for the new Mac Studio.
bottlepalm 13 hours ago [-]
The Neo as well. I just need a Mac to test with, and finding one is ridiculous. Hackintoshes are hell to set up and run like crap. I tried https://www.macincloud.com/ and that was a waste of time. Someone take my money.
montebicyclelo 13 hours ago [-]
Ever considered second-hand, slightly older gens? Even the M1 is still great for many use cases. E.g. corps are often selling them on eBay in pretty good condition.
bayesnet 8 hours ago [-]
Mac prices on eBay are sort of bizarre. Back when I was looking (when you could still purchase a new one in a reasonable timeframe), many of the higher-end listings cost as much as or more (!) than just getting them from Apple. I ended up buying an Apple certified refurbished Mac Studio for less than the comparable eBay listing.
Not sure who's buying these, or if it's just people dreaming of finding a rube.
morphle 13 hours ago [-]
Indeed, tons of refurbished ones for $250-$300.
bottlepalm 12 hours ago [-]
Yeah, it does look like 5-year-old M1s are going for $300 on eBay, but man, that's painful: a 5-year-old machine for that much, when you could get a new Neo or Mini for $600, if only you could buy them. I probably should just get the M1, test, and sell it back on eBay. Thanks.
deaux 12 hours ago [-]
If you're going to do any kind of work on it, I'd choose a 5-year-old 16 GB M1 over a Neo every single time. 8 GB is what's painful. The CPU difference is very small anyway.
awakeasleep 2 hours ago [-]
You're thinking about these M-series Macs like they're computers from the 2010s.
An M1 MacBook Air will perform virtually identically for most, if not all, laptop-class computing tasks.
MikeNotThePope 11 hours ago [-]
If a $300 Mac can do the job, a $600 Mac is overkill.
solarkraft 7 hours ago [-]
They are similarly good. In fact I prefer my M1 MBA to the newer models because of its shape.
yreg 12 hours ago [-]
Neo isn't much better than M1.
Marsymars 3 hours ago [-]
The only place where it seems unequivocally better for mainstream use is hardware AV1 decode.
(If I were building a Jellyfin server today, I'd probably use a MacBook Neo.)
kalleboo 7 hours ago [-]
The M1 is even better in some ways; for instance, it has Thunderbolt ports (plural, even!).
wang_li 6 hours ago [-]
In my personal experience with a 13" M1 MacBook Air, a 15" M4 MacBook Air, an M4 Pro Mac Mini, and a MacBook Neo, the Neo is the fastest for single-threaded, strictly CPU-bound tasks. E.g. calculating 200x200, 1000-max-iteration Mandelbrot fractals, it does ~785 in ten seconds compared to ~760 on the M4s and somewhere in the 600s for the M1.
Given its RAM size, I'm not going to be spinning up VMs, but in terms of general-purpose computing it's more than adequate. And out of the box you get a word processor, spreadsheet, presentation, video editing, digital audio, web browser, and a bunch of other things. Xcode is free. This is easily a laptop you can buy and use for years in 90% of settings.
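For reference, the kind of single-threaded loop being timed here - a hedged Python sketch of the benchmark's shape, not the code actually used for those numbers:

```python
def escape_count(c, max_iter=1000):
    """Iterate z -> z^2 + c; return how many steps until |z| exceeds 2."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

def mandelbrot_grid(size=200, max_iter=1000):
    """One 200x200 frame: strictly CPU-bound, single-threaded work
    over the region roughly [-2, 1] x [-1.5i, 1.5i]."""
    return [
        [escape_count(complex(-2 + 3 * x / size, -1.5 + 3 * y / size), max_iter)
         for x in range(size)]
        for y in range(size)
    ]
```

Counting how many `mandelbrot_grid()` calls complete in ten seconds reproduces the comparison's shape, though absolute counts obviously depend on language and compiler, not just the CPU.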
windowsrookie 3 hours ago [-]
I'm not really sure why your Neo is performing better than the M4.
The M4 and the Neo share the same CPU architecture, but the M4 has 4 performance cores at 4.4GHz, while the Neo has 2 performance cores at 4GHz.
The Neo also does not have a CPU heatsink, so it thermal throttles after only a few seconds.
Yes, all the M-series have more cores, they often have better thermal management, and they have more memory bandwidth. (Though the Neo still has crazy high bandwidth.) But for a single-threaded, strictly compute-bound task that runs in 10 seconds, it outperforms the M4 cores. I don't know why; I'm just sharing my experience.
If you literally just need to borrow one, I'd just buy an Air from Apple directly and then return it within the 14-day window. I'll sometimes do this if I need an extended repair on my personal one, or if there's a new Mac I want to try.
valleyer 12 hours ago [-]
This is unethical.
nirava 11 hours ago [-]
If the return policy explicitly allows "change of mind", I'd say it's in a gray area, though of course it isn't sustainable if everyone starts doing this. I assume there's a returns-to-buys ratio per payment identity used to ban the largest offenders.
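Something like this toy sketch - the field names and thresholds are entirely invented for illustration, not anything Apple is known to use:

```python
def flag_serial_returners(orders, max_ratio=0.5, min_buys=3):
    """Flag payment identities whose returns-to-buys ratio exceeds a
    threshold. `orders` maps an identity to (buys, returns) counts;
    identities with too few purchases are ignored to avoid noise."""
    return sorted(
        identity
        for identity, (buys, returns) in orders.items()
        if buys >= min_buys and returns / buys > max_ratio
    )
```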
Also, there should be some universally accepted way to have access to your data and a secure personal computer in the duration your device is getting repaired.
chongli 8 hours ago [-]
> Also, there should be some universally accepted way to have access to your data and a secure personal computer in the duration your device is getting repaired.
Yes, exactly. When getting your car repaired there’s loaners or rentals to allow you to keep driving. Why isn’t a loaner computer a standard thing?
Marsymars 3 hours ago [-]
My local library will loan you a Chromebook for up to three weeks (three weeks reserved, can extend if there's availability) at no charge.
chongli 3 hours ago [-]
That’s fantastic for people in the Chromebook ecosystem. They can log in and away they go! Not so great for us in the Apple ecosystem though.
wildrhythms 8 hours ago [-]
Glad to see the hackers have arrived to defend a billion dollar corporation
lotsofpulp 7 hours ago [-]
Glad to see the people who do not understand tragedy of the commons have arrived.
Reviving1514 3 hours ago [-]
What issues did you have with MacinCloud? I was considering using it to build and sign some DMGs and test my app, and I'd love to know what your experience was like.
bottlepalm 2 hours ago [-]
Awful. I needed to install an app to test it, and couldn't install anything: the Mac literally had 0 GB free. It's not like having your own Mac whatsoever. It's good for some narrow use case that I don't have. I felt totally duped by their advertising; luckily I think I only lost 30 bucks.
scboffspring 13 hours ago [-]
Not sure what happened to you with MacinCloud, but I used a Scaleway-hosted Mac mini M1 a couple of years ago for a self-hosted CI server, and it worked very nicely.
bottlepalm 12 hours ago [-]
They don't really advertise that you can't install anything on the machine. It's super restricted.
ZiiS 12 hours ago [-]
I think you can at $50/month
hurricanepootis 13 hours ago [-]
I remember 4 years ago I was able to set up macOS in a virtual machine. Maybe you can set up an Intel copy of macOS in a QEMU/KVM virtual machine?
bottlepalm 12 hours ago [-]
I tried this: https://oneclick-macos-simple-kvm.notaperson535.is-a.dev/ - it wasn't bad for a 'one click install' and got the VM running; it was just unusably slow and burning up my machine. It seems I need graphics acceleration in the VM, which doesn't work on my Ryzen GPU (QEMU running on Windows), and I don't want to deal with dual booting. I'd literally rather buy a Mac (if I could).
mitjam 5 hours ago [-]
You need Mac hardware for this, but I run a macOS VM in Parallels, not connected to my iCloud account. I think it's nice for OpenClaw and CI.
morphle 13 hours ago [-]
If you send me an email, I might have a machine, or a macOS VM on that machine, that you can use.
bottlepalm 12 hours ago [-]
Thanks for the offer, think I'm gonna go the eBay route.
dyauspitr 12 hours ago [-]
I don't even think you can do Hackintoshes anymore post-Apple silicon, right? I remember having one about 10 years ago, and it was absolutely fantastic and ran really well. Wish we could do that now so I wouldn't have to develop apps on my M3 MacBook Air, which constantly runs out of memory and is a huge pain.
forsalebypwner 11 hours ago [-]
You can for now, but in the very near future (macOS 27 I think?) Apple will completely drop support for Intel, and Hackintosh will be dead
wiradikusuma 13 hours ago [-]
I guess the sudden demand is due to OpenClaw? But most people will still use cloud LLMs, right? Is there anything in particular the Mac Mini has that non-Macs lack?
zarzavat 12 hours ago [-]
Not just OpenClaw. The Mac mini is just stupidly good value for a desktop computer, and the RAM prices have only enhanced its appeal.
Apple doesn't make much of a fuss about it, but their chip performance is laughably ahead of the other chipmakers'.
The Mac Mini M4 gets a score of 3788 in Geekbench[0]. The top of the PC processor chart is 3395[1]. It's not even Apple's latest chip!
PC processors can only keep up by adding more cores, but real world performance in many workloads is enhanced by having a smaller number of higher performance cores.
Geekbench is basically trash. People keep using it for comparing Mac performance because many of the things people usually benchmark don't run on Macs.
But single-number outputs like that are useless. Is the number ~10% higher because it's consistently ~10% faster at everything, or because it's 100% faster on a minority of things and slower at everything else? The first one is pretty unlikely when comparing processors with different designs, and indeed that isn't it:
The CPU in those charts with a similar TDP to the M4 is the Ryzen HX 370. You can see that the M4 is ahead of it in a few of the tests (C-Ray, DuckDB, PyBench, FLAC) but in even more of them the M4 is at the bottom of the stack. (Only a third of those charts are actually performance; each performance chart is followed by two power consumption charts.)
And the ~20W TDP is a nice parlor trick (the HX 370 is the only one on the list that competes with it there) but in a desktop CPU that's pretty irrelevant. Whereas if you compare it to the CPUs that can be had for a similar price (e.g. Ryzen 9700X, 65W), it's only ahead in C-Ray and FLAC while losing quite badly in most of the others and subjecting you to unupgradable soldered memory that the PC hardware doesn't.
Meanwhile doing ray tracing on a CPU instead of a GPU isn't much fun, and FLAC is an audio codec so a ~10% improvement there is probably not going to be a big part of your day if you're not a full-time sound engineer. So does averaging those kinds of things in to make a single benchmark number make sense? Or should you be looking at the results on applications you actually use?
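To make the objection concrete with invented numbers: two chips can post near-identical composite (geometric-mean) scores while having wildly different per-workload profiles.

```python
from math import prod

def geomean(xs):
    """Geometric mean, the aggregation composite benchmark scores
    typically use."""
    return prod(xs) ** (1 / len(xs))

# Invented relative subtest scores (baseline chip = 1.0 everywhere).
chip_uniform = [1.10, 1.10, 1.10, 1.10]  # ~10% faster on everything
chip_spiky   = [2.00, 0.95, 0.90, 0.95]  # 2x on one test, slower elsewhere
```

The spiky chip's composite lands within a few percent of the uniform chip's, even though it is twice as fast on one workload and slower on the rest - which is exactly why the per-application results matter more than the headline number.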
philistine 6 hours ago [-]
How are we supposed to trust these charts when they can't even be bothered to specify which Apple silicon chip is being tested? The Mac Mini comes in two versions.
ffsm8 11 hours ago [-]
If you remove the Mac filter, its performance is not even in the top ten.
Which is obvious if you spend more than half a microsecond thinking about it, because Apple silicon barely draws any power. Its performance is fantastic in its niche, which is squarely within what a home user cares about, but it's not leading on benchmark performance, because that's not what Apple designed it for.
The reason it's coincidentally good for local AI inference also comes down to the embedded GPU having shared access to the system memory. That means low performance/throughput but large memory capacity.
Which is great for home use, but once again not gonna top charts.
zarzavat 11 hours ago [-]
Which top 10 are you talking about? If you mean the top absolute Geekbench scores, those are always achieved with the assistance of cryogenic cooling.
ffsm8 6 hours ago [-]
Sure, the top ten may be using highly advanced cooling.
And the fact that that's possible should already prove that Apple decided on trade-offs that don't enable bleeding-edge performance, hence not topping benchmarks.
But aside from that inference you should have been able to make yourself, you're ignoring that Apple silicon is still beaten on all performance benchmarks even at stock settings.
Apple chose a performance profile for their chips, and it's not "highest performance while sacrificing cooling and energy usage". Others did, and Apple did well not chasing benchmarks, as that would be the epitome of idiocy for their target market. They're not targeting high-performance servers with massive cooling setups; they're targeting mobile workstations and entertainment devices.
They have no need for bleeding-edge performance trade-offs. They need power efficiency and enough performance to feel snappy on all the workloads people will run on these devices, which isn't benchmarks, because none of their users _need_ highly sustained processing power. It's just not something they'd ever target.
And I'm not even addressing the fact that Geekbench is notorious for being absolutely shit at showing actual processing power.
ashdksnndck 12 hours ago [-]
The Mac mini has first-class access to iCloud, Photos, iMessage, etc. So if you are deep in the Apple ecosystem, you might prefer it for that reason. I have a Windows gaming desktop that I could use as a server for openclaw/cowork, but I realized I simply don't trust that system enough to give it access to all the personal stuff I'm giving to the AI. I trust Anthropic and Apple. I don't trust whatever junk is running on my gaming desktop.
If you want to run local models, another advantage is Apple's unified memory architecture. The biggest Mac mini has 64GB of RAM, and the Mac Studio goes up to 512GB. Compare this little box to the monster Nvidia GPU system you would have to buy to get the same memory, and how much your PG&E bill would go up. That doesn't account for the shortage of basic $600 Mac minis, though.
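The back-of-the-envelope math for why memory size dominates here: weight footprint is roughly parameter count times bits per parameter, divided by 8 (a rule of thumb that ignores KV cache and runtime overhead).

```python
def model_weight_gb(params_billion, bits_per_param):
    """Approximate weight footprint in GB for a given quantization."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# A 70B model squeezes onto a 64GB Mac mini only at ~4-bit quantization
# (~35GB of weights), while fp16 weights alone need ~140GB of unified
# memory - Mac Studio territory.
```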
operatingthetan 13 hours ago [-]
An M4 mini is overkill just to run OpenClaw. I'm running it on a Pentium J5005 that's also running 20 other services in Docker. I think the main draw was that many wanted it to be able to access iMessage. People also dream of using the Mac to run the LLM, but the 16GB ones don't have enough RAM.
throwa356262 6 hours ago [-]
You can run nullclaw etc on a Pi zero. People who are paying big $ are mostly trying to run local LLMs.
Personally, I would rather pay a few bucks for Qwen, or just use Gemma4, which runs on a potato. But I guess we are all different.
apexalpha 11 hours ago [-]
When they say 'due to OpenClaw' they mean running the AI models that OpenClaw uses, not OpenClaw itself.
hparadiz 12 hours ago [-]
The shortage is for the 512GB, 256GB, and 128GB models.
ashdksnndck 10 hours ago [-]
The basic 16GB Mac mini is also hard to buy. I bought one used not to save money but because I couldn’t find any store online with it in stock.
reverius42 12 hours ago [-]
Those are the ones that can run the LLMs. Not a coincidence.
amelius 12 hours ago [-]
People are running openclown on microcontrollers.
hparadiz 12 hours ago [-]
You can look up benchmarks. It differs depending on the model of Mac Mini and the LLM.
The takeaway is that some of the Apple hardware hits a sweet spot of performance and price, which may change in the future, but for now it's causing a lot of demand from people who want to run inference without GPUs.
Also, Macs keep a lot of their resale value, so you can use them for a while and then sell them for sometimes 80% of their original value.
chillfox 12 hours ago [-]
Affordable ram!
I recently bought one for my k3s cluster, and it was the cheapest 16GB of RAM I could get, by a decent margin.
znpy 13 hours ago [-]
My understanding is that openclaw is only a factor, and a relatively minor one.
Most likely the limiting factor is the crunch that chip companies are going through.
mark_l_watson 7 hours ago [-]
The shortages are caused by memory manufacturers in Asia having supply chain problems, and the pending strike at Samsung, right?
dabinat 4 hours ago [-]
AI companies are buying up huge quantities of memory and the memory manufacturers have decided to prioritize those purchases over the consumer market. And Micron decided to just shut down their Crucial brand completely.
dawnerd 5 hours ago [-]
More the AI bros buying them up and scalpers trying to make a quick buck. There's a bunch on Marketplace now with a couple hundred in markup.
ksec 12 hours ago [-]
It is the SoC, not the memory, as reported in the earnings call. And the lead time for an SoC is 3-4 months, i.e. even if they decided to increase orders in March, it would be at least July before those volumes reached warehouses ready to be shipped. And that assumes there is spare TSMC capacity for Apple to book; right now there is very little to none.
What annoys me most isn't the Mac Studio and Mini; it's the Neo. Someone must have done a poor job of demand planning (as well as pricing). Only 5M units until the end of the year, even as they are now increasing it to 10M. And it will likely miss this year's education cycle in the summer.
Hopefully they do better with the A19 Pro Neo. The Mac could reach 400M to 500M in usage share, roughly 25% of the PC market.
akmarinov 12 hours ago [-]
Historically, whenever they've done lower-priced products - the iPhone SE, the current E editions, etc. - they've sold poorly. That's probably what got them.
The thing is that the Neo is actually useful.
joakleaf 10 hours ago [-]
I disagree.
I am old enough to remember the iPod nano, especially the 2nd generation. They were effectively lower-priced, smaller iPods.
Apple sold millions of them much, much quicker than the iPods and iPod minis (which came right before). Especially in 2006 it was _the_ Christmas gift, just before the iPhone, iPod touch, and later the iPad mini took over. Possibly Steve Jobs' demo, where he showed how it fit into the otherwise useless small jeans pocket, helped convince the world.
The iPod nano effectively wiped out the competing music player market.
The Neo reminds me of the iPod nano and iPad mini: a smaller, cheaper version of an existing successful product.
I think the iPhone SE and E are the outliers.
Royce-CMR 7 hours ago [-]
I think the outliers have burned them more recently, and even Apple loses institutional memory over time.
That said, I remember everything you said and 100% agree: the nano killed everything around it. It's been a while since Apple had a similar home run; that's not an excuse for the clear lack of vision/leadership, but it is a factor nonetheless.
someguydave 5 hours ago [-]
TSMC needs to get that phoenix fab churning out M5 cores pronto
projektfu 9 hours ago [-]
It reminds me of when the Mac LC came out. We waited months to receive ours.
apexalpha 11 hours ago [-]
>It is the SoC, not the memory as reported in the earning call.
Is the memory not part of the SoC?
fredoralive 9 hours ago [-]
It’s a separate die, even in chips like the phone ones where it’s in the same package as the SoC.
apexalpha 2 minutes ago [-]
Ah, right. I always thought an SoC was just a CPU + GPU and RAM, the full package.
touristtam 12 hours ago [-]
Rightly deserved
polyterative 9 hours ago [-]
Loving my m4 mac studio. I expected this as well
bajor 12 hours ago [-]
More fomo
libertine 9 hours ago [-]
Since COVID, the FOMO has been turned up to 11.
I wanted to build a gaming PC, and now that's out of the question, even though I can afford one at current prices. I just refuse to participate in this, so I quit.
There are thousands of great games that run on older hardware that would last me a lifetime of gaming.
Consumers always get the shit fed to them.
BerkeyMcBerkey 12 hours ago [-]
Oops, our lack of foresight into AI tripped us up, again.
amelius 12 hours ago [-]
This is just a preview.
At some point even the most economically liberal people will say that enough is enough. Making money and building capital is perfectly okay if you're working hard, but if you use that money or capital against the rest of us (who chose a different life), then we have a problem.
Auzy 11 hours ago [-]
The Mac Studio definitely shouldn't be.
My M2 Studio was the only computer I ever owned that had issues with the USB-C ports not working with certain cables (and for the price, it should have had better performance).
I've owned an M2 Mac Studio, a PowerMac G5, and a Mac Pro. Every single one had flaws you would consider inexcusable on PC hardware priced at half the amount.
The PowerMac G5 had terrible video cards (the liquid-cooled ones also had issues with leaks, but ignoring that). The Mac Pro also had terrible video cards (they were PCI-X), plus Fully Buffered ECC RAM (which cost substantially more than any other RAM).
Apple still can't even manufacture a proper mouse (who the hell puts a USB-C port on the bottom?).
It's ridiculous.
If Linux distros had a way to integrate Android as a first-class citizen (like iOS is in macOS), it would greatly boost the number of apps available in the ecosystem and have a huge impact on macOS, I feel. Waydroid is good, but it's still too clunky (I'd like to see something more like Wine for Android, where it's native).
jmalicki 11 hours ago [-]
But the Mac Studio can run LLMs better than any reasonable non-enterprise-server setup. That is the only thing that matters at this point - it's an LLM accelerator now, not a personal computer.
ajvs 11 hours ago [-]
Valve's Lepton (Waydroid fork) might solve this when it gets released.
t-writescode 11 hours ago [-]
Why ... did you put so much energy into typing this out? That's a lot of energy to put into .... something you don't like and probably shouldn't care so much about.
Do you wish you could go back to macs?
noisem4ker 9 hours ago [-]
Heaven forbid they criticize Apple products that are the very subject of the article.
Will you be around to police the discussion the next time something with "Microsoft" in the title is submitted and the shit-flinging-at-Windows-and-the-PC-world contest starts, regardless of any relevance to the topic at hand?
iLoveOncall 11 hours ago [-]
Low supply doesn't mean high demand. I don't think many people are buying Mac Studios, so they just lowered production.
simonh 5 hours ago [-]
He said higher-than-expected demand. If he lied on an earnings call, that's a prosecutable offence.
iLoveOncall 3 hours ago [-]
It can be true of the overall demand across the two products, since the Mac Mini is indeed very much in demand, but false for the Mac Studio individually.
simonh 3 hours ago [-]
In theory, maybe, but I think he'd be on thin legal ground doing that. It seems more likely that the lack of availability of Minis would push people up the product range, and I don't see why we would expect a drop in Studio demand. Is there any reason to?
_the_inflator 13 hours ago [-]
Apple got so bad with its products - so bad, indeed, that they took a bet on the low-price sector with the Neo and abandoned the powerhouses. It's funny, because thanks to the high profit margin as a relative share of the price, Apple earns more by selling a few top models than dozens of Neos.
Tim Cook, the supply chain master, leaves the house just as the very reason he was hired in the first place is in dire straits.
I don't think his successor will change that, since Cook made sure no one remembers Jobs anymore, and as top manager he won't permit a reversal of many of his decisions.
So he will lead through a CEO he controls. Only if the new guy takes up the battle in the name of product is there a chance, but that would mean Cook and the new CEO both have to be dismissed. So, popcorn times: I think Apple is going to stay as boring as it has become, while the quality constantly declines.
dwedge 12 hours ago [-]
The Neo won't sell mere dozens of units; it will take the low-end laptop market by storm. I think your comment will age very poorly.
robertjpayne 12 hours ago [-]
This, exactly. No other laptop comes close on price for the hardware you get. Yeah, you may get more RAM in a PC, but I promise you it won't feel as fast in day-to-day use, or have as good a display or battery life.
ankurdhama 11 hours ago [-]
In the same price range you can get a PC that not only has more RAM but also better multicore performance, better disk speed, and better port selection. Yes, the Neo wins on build quality, trackpad, speakers, display, and battery life, but the PC would also let you install any Linux distro.
46493168 8 hours ago [-]
So the Neo wins on things people care about
stingraycharles 12 hours ago [-]
The Neo is already considered a huge success and is the reason for the scarcity.
swiftcoder 12 hours ago [-]
> so bad indeed that they took a bet on the low price sector with the Neo and abandoned the powerhouses
The Neo isn't just a bet on low prices - it's a machine that convinces people they can get away with less RAM. In the middle of a pricing crunch, why wouldn't you ship an 8GB machine like the Neo?
It's a win-win: Apple gets to ship a brand-new SKU in volume despite the RAM crunch, and it gets to punch into a previously untouched market.
lloeki 12 hours ago [-]
I'm hoping that the success of the Neo and the RAM shortage make people realise that 8GB should be enough for most tasks without constantly swapping.
That 32GB or even 64GB is considered a minimum to be able to run some word processing, chat app, fetch remote content, and display funny cat photos is preposterous. In terms of information storage, these are absolutely immense numbers.
The infinite treadmill of chasing for more RAM and then immediately proceeding to carelessly fill all of it at the first line of code is part of a deeper, wasteful, and self-imposed obsolescence process.
We don't need more RAM, we need more frugal software.
mert-kurttutan 7 hours ago [-]
I am curious: where did you learn that 64 GB is considered the minimum for those things? I have never heard this.
I can do all of those things just fine on a cheap MSI laptop with 16 GB of RAM.
simonh 9 hours ago [-]
> so bad indeed that they took a bet on the low price sector with the Neo and abandoned the powerhouses
Says this on a post about the powerhouses all selling like hot cakes, with months-long waiting times.
How does Overseerr help? I've never really understood it. If I could give my family access to Overseerr over a VPN, I could just give them Sonarr/Radarr directly.
It's nicer to use than Sonarr / Radarr, and a single interface for people to learn. It has suggestions for people that need that. There's also MCP for it, so instead of setting up N MCPs for Sonarr / Radarr / The Next Arr, you set one up for Seerr.
You can also link people's Plex Watchlist. So the whole request / add to Sonarr/Radarr can be done from anywhere they use Plex.
This of course assumes you already have Plex or Jellyfin exposed to the public internet without a VPN.
If you're already forcing users to use a VPN for Plex/Jellyfin then hosting Overseerr locally and making it accessible over the VPN should be trivial.
And then for 'big' work I tend to switch to a cloud offering. With a 128 GB Mac I would probably be 100% offline.
Using an AI for that is pretty wasteful.
OP said he does local inference, and you are complaining about 'wastefulness'. OP brought the request service to where his friends already were. For them, this is probably the better solution. Seerr cannot expose itself to Discord/Telegram, and it requires users to have another account. I mean, sure, you could have the AI code those too.
There are plenty of ways to solve problems, and some people do it differently. Sometimes, just to see if it works. This is a good thing. Not everyone has the time nor inclination to maximize efficiency.
Seerr can use Plex accounts; assuming people are using OP's Plex library, there are no new accounts to be made.
> OP brought the request service to where his friends already were ...
Another thing Seerr can do is scan the watch lists of any connected Plex accounts, so assuming the friends are already using Plex, they'd already have access to it.
> Not everyone has the time nor inclination to maximize efficiency.
Wasn't that a thing LLMs were supposed to help with?
Though egress is heavily restricted for OpenClaw, and everything is behind a firewall.
So rather than having to go to a bunch of different websites or apps to get things done, I've linked them all to Hermes (via skills) and chat with the Hermes agent on my phone.
I want a movie? I just say download XYZ. Shows up in Plex 5 min later.
I want to research something with multiple different perspectives? Rather than going to OpenWebUI and using that, I just asked a Hermes agent to examine an issue from multiple different viewpoints and get back to me with a conclusion.
The 192GB M3 Ultra was on sale at the local Microcenter for $200 below what Apple's own site advertised. Since I knew the RAM shortage would significantly increase the price of the M5 Studio when (or if) it finally did come out, I decided to buy the M3. Time has shown that was the right decision.
Maybe by the time they sort it out there will be an M5 Ultra Mac Studio with a full terabyte of RAM.
Not sure who’s buying these or if it’s just people dreaming about finding a rube.
An M1 MacBook Air will perform virtually identically for most if not all laptop-class computing tasks.
(If I were building a Jellyfin server today, I'd probably use a MacBook Neo.)
Given its RAM size I’m not going to be spinning up VMs, but in terms of general purpose computing it’s more than adequate. And, out of the box, you get a word processor, spreadsheet, presentation, video editing, digital audio, web browser, and a bunch of other things. Xcode is free. This is easily a laptop you can buy and use for years in 90% of settings.
The M4 and the Neo share the same CPU architecture, but the M4 has 4 performance cores at 4.4 GHz, while the Neo has 2 performance cores at 4 GHz.
The Neo also has no CPU heatsink, so it thermally throttles after only a few seconds:
https://cdn.arstechnica.net/wp-content/uploads/2026/03/MacBo...
Also, there should be some universally accepted way to have access to your data and a secure personal computer in the duration your device is getting repaired.
Yes, exactly. When getting your car repaired there’s loaners or rentals to allow you to keep driving. Why isn’t a loaner computer a standard thing?
Apple doesn't make much of a fuss about it but their chip performance is laughably ahead of the other chipmakers.
The Mac Mini M4 gets a score of 3788 in Geekbench[0]. The top of the PC processor chart is 3395[1]. It's not even Apple's latest chip!
PC processors can only keep up by adding more cores, but real world performance in many workloads is enhanced by having a smaller number of higher performance cores.
[0]: https://browser.geekbench.com/mac-benchmarks
[1]: https://browser.geekbench.com/processor-benchmarks
But single-number outputs like that are useless. Is the number ~10% higher because it's consistently ~10% faster at everything, or because it's 100% faster on a minority of things and slower at everything else? The first one is pretty unlikely when comparing processors with different designs, and indeed that isn't it:
https://www.phoronix.com/review/apple-m4-intel-amd-linux/4
https://www.phoronix.com/review/apple-m4-intel-amd-linux/5
https://www.phoronix.com/review/apple-m4-intel-amd-linux/6
https://www.phoronix.com/review/apple-m4-intel-amd-linux/7
The CPU in those charts with a similar TDP to the M4 is the Ryzen HX 370. You can see that the M4 is ahead of it in a few of the tests (C-Ray, DuckDB, PyBench, FLAC) but in even more of them the M4 is at the bottom of the stack. (Only a third of those charts are actually performance; each performance chart is followed by two power consumption charts.)
And the ~20W TDP is a nice parlor trick (the HX 370 is the only one on the list that competes with it there) but in a desktop CPU that's pretty irrelevant. Whereas if you compare it to the CPUs that can be had for a similar price (e.g. Ryzen 9700X, 65W), it's only ahead in C-Ray and FLAC while losing quite badly in most of the others and subjecting you to unupgradable soldered memory that the PC hardware doesn't.
Meanwhile doing ray tracing on a CPU instead of a GPU isn't much fun, and FLAC is an audio codec so a ~10% improvement there is probably not going to be a big part of your day if you're not a full-time sound engineer. So does averaging those kinds of things in to make a single benchmark number make sense? Or should you be looking at the results on applications you actually use?
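The objection above is easy to demonstrate with toy numbers (made up for illustration, not measurements): a geometric-mean composite, which is how single-number benchmark scores are typically built, can rank a chip first even when it loses most individual workloads.

```python
# Toy illustration (made-up numbers, not measurements): a single
# geometric-mean score can crown a chip that loses the majority of
# individual workloads, provided one win is large enough.
from math import prod

def geomean(scores):
    """Geometric mean of a list of relative scores."""
    return prod(scores) ** (1 / len(scores))

# Relative scores on five workloads (1.0 = baseline chip).
chip_a = [3.0, 0.9, 0.9, 0.9, 0.9]   # one huge win, four small losses
chip_b = [1.0, 1.0, 1.0, 1.0, 1.0]   # the baseline

print(geomean(chip_a) > geomean(chip_b))            # True: A "wins" the composite
print(sum(a < b for a, b in zip(chip_a, chip_b)))   # 4: yet A loses 4 of 5 tests
```

Which is exactly why the per-application charts are more informative than the headline number.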
Which is obvious if you spend more than half a microsecond thinking about it, because Apple silicon barely draws any power. Its performance is fantastic in its niche, which is squarely what a home user cares about, but it's not leading on benchmark performance, because that's not what Apple designed it for.
The reason it's coincidentally good for local AI inference also comes down to the fact that the embedded GPU has shared access to system RAM. That means low performance/throughput but a large memory pool.
Which is great for home use, but once again not going to top charts.
And the fact that that's possible should already prove that Apple settled on trade-offs that did not target bleeding-edge performance, hence it's not going to top benchmarks.
But that aside, you're ignoring that Apple silicon is still beaten on raw performance benchmarks even at stock settings.
Apple chose a performance profile for their chips, and it's not "highest performance while sacrificing cooling and energy usage". Others did. And Apple did well not chasing benchmarks, as that would be the epitome of idiocy for their target market. They're not targeting high-performance servers with massive cooling setups. They're targeting mobile workstation and entertainment devices.
They have no need for bleeding-edge performance trade-offs. They need power efficiency and enough performance to feel snappy on all the workloads people run on these devices, which isn't benchmarks, because none of their users _need_ highly sustained processing power. It's just not something they'd ever target.
And I'm not even addressing the fact that Geekbench is notorious for being absolutely shit at reflecting actual processing power.
If you want to run local models, another advantage is Apple's unified memory architecture. The biggest Mac mini has 64 GB of RAM and the Mac Studio goes up to 512 GB. Compare this little box to the monster Nvidia GPU system you would have to buy to get the same memory, and how much your PG&E bill would go up. That doesn't account for the shortage of basic $600 Mac minis, though.
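A rough back-of-the-envelope shows why the unified memory matters for local models. This estimates weight memory only (KV cache and activations add more), and the 70B / 4-bit figures are just an illustrative assumption:

```python
# Back-of-the-envelope weight-memory estimate for a local model.
# Weights only; KV cache and activations are ignored. Bytes per
# parameter depends on the quantization level.
def weights_gb(params_billion: float, bits_per_param: float) -> float:
    """GiB needed to hold the weights of a model."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 2**30

# A 70B-parameter model at 4-bit quantization: ~32.6 GiB of weights.
# That fits in a 64 GB unified-memory Mac, but not in a typical
# 24 GB discrete GPU's VRAM.
print(round(weights_gb(70, 4), 1))  # ≈ 32.6
```

The same arithmetic is why the 512 GB Studio gets attention: even very large models at modest quantization fit entirely in memory the GPU can address.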
Personally, I would rather pay a few bucks for Qwen or just use gemma4 which runs on a potato. But I guess we are all different.
The takeaway is that some of the Apple hardware hits a sweet spot of performance and price. That may change in the future, but for now it's driving a lot of demand, because people can run inference without GPUs.
Also Macs keep a lot of their resale value so you can use them for a while and then sell them for sometimes 80% of their original value.
I recently bought one for my k3s cluster, and it was the cheapest 16 GB of RAM I could get by a decent margin.
Most likely the limiting factor is the crunch that chip companies are going through.
What annoys me most isn't the Mac Studio and Mini; it's the Neo. Someone must have done a poor job with demand planning (as well as pricing): only 5M units through the end of the year, and they are only now increasing that to 10M. And it will likely miss this year's education cycle in the summer.
Hopefully they do better with the A19 Pro Neo. Mac usage could reach 400M to 500M units, roughly 25% of the PC market.
The thing is that the Neo is actually useful.
I am old enough to remember the iPod nano -- Especially the 2nd generation. They were effectively low-priced and smaller iPods.
Apple sold millions of these much much quicker than the iPods and iPod minis (which came right before). Especially in 2006, it was _the_ "Christmas gift" just before the iPhone, iPod touch and later iPad mini took over. Possibly Steve Jobs' demo where he showed how they fit into the otherwise useless small jeans pocket helped convince the world.
The iPod nano effectively wiped out the competing music player market.
The Neo reminds me of the iPod nano and iPad mini. It is a smaller, cheaper version of an existing successful product.
I think the iPhone SE and E are the outliers.
That said, I remember everything you said and 100% agree: the nano killed everything around it. It's been a while since Apple had a similar home run; not an excuse for the clear lack of vision/leadership, but a factor nonetheless.
Is the memory not part of the SoC?
I wanted to build a gaming PC, and now that's out of the question, even though I could afford one at current prices. I just refuse to participate in this, so I quit.
There are thousands of great games that run on older hardware that would last me a lifetime of gaming.
Consumers always get the shit fed to them.
At some point even the most economically liberal people will say that enough is enough. Making money and building capital is perfectly okay if you're working hard, but if you use said money or capital against the rest of us (who chose a different life) then we have a problem.
My M2 studio was the only computer I ever owned that had issues with the USBC ports not working with certain cables (and for the price, it should have had better performance).
I've owned an M2 Mac Studio, a PowerMac G5, and a Mac Pro. Every single one had flaws that you would consider inexcusable on PC hardware priced at half the amount.
The PowerMac G5 had terrible video cards (the liquid-cooled models also had issues with leaks, but ignoring that). The Mac Pro also had terrible video cards (they were PCI-X), plus Fully Buffered ECC RAM (which cost substantially more than any other RAM).
Apple still can't even manufacture a proper mouse (who the hell puts a USB-C port on the bottom?).
It's ridiculous.
If Linux distros had a way to integrate Android as a first-class citizen (like iOS is in macOS), it would greatly boost the number of apps available in the ecosystem, and I feel it would have a huge impact on macOS. Waydroid is good, but it's still too clunky (I'd like to see something more like Wine for Android, where it runs natively).
Do you wish you could go back to macs?
Will you be around to police the discussion the next time something having "Microsoft" in the title is submitted and the shit-flinging-at-Windows-and-the-PC-world contest starts regardless to any relevance to the topic at hand?