Hacker News | jeffgus's comments

Why not?

Seems to me that IBM is trying to differentiate their AI from others by offering a complete solution that can be self-hosted.

The part that cannot be produced locally, the core model, IBM publishes the inputs for and indemnifies. From there, IBM and Red Hat provide a holistic solution for fine-tuning, pipelines, and a platform to run on (RHEL AI and OpenShift) that endeavors to "just work."

The fine-tuning tool can be used on any model from Hugging Face, as can the pipeline tooling. IBM is saying that they are trying to build core models that are more friendly to fine-tuning: smaller, faster, and more focused.

They are not trying to be OpenAI or Anthropic, but the IBM models could be a really nice fit for many applications.


I believe the parent comment was in reference to IBM Watson.


Yeah, could be. IBM has been working on AI for ages. They have been able to make a splash with events in chess and Jeopardy, but can't seem to sustain the interest. If only IBM's marketing skills were as good as their research.


Ah, AOLServer! I remember using it when it was NaviServer. I helped to launch my employer's website (packardbell.com) on NaviServer on Solaris way back then. Some of the things I liked:

* It had a decent HTML editor that our marketing people could easily use to update news items on the website. The editor talked directly to NaviServer in a WebDAV-ish way to browse and edit the files.

* It was packaged with Illustra RDBMS for full-text search of the hosted website. Never heard of Illustra? Neither had I back then, but when I fired up the Illustra CLI, it looked awfully familiar! I quickly learned that Illustra was a commercialized version of Postgres.

* I emailed the vendor at the time about a Linux port. The response was that Linux didn't have proper threading libraries. This was back when Linux had sub-par support for threads in the libraries. I replied with a pointer to a thread on LKML about Linux kernel-level threading (IIRC, clone()), and the response was that the functionality looked similar to Irix, and they would try a port. They released a Linux port not long after.

* NaviServer didn't just serve HTML pages. It was also a basic application server thanks to its integrated Tcl interpreter. I never really mastered Tcl, but I was able to build a basic repair center lookup page on the site.

All this back in the mid-90's!

In many ways, NaviServer was way, way, way ahead of its time.


But Netflix was NOT paying transit. They were paying for peering. Transit costs more. To reduce costs, they peered with large networks. The problem is that the company they paid to set up peering (Cogent) didn't want to risk their settlement-free peering agreements. Cogent would have had to start paying for peering. It turned out it was much better for Netflix to set up their own peering agreements.


Incorrect. Netflix purchased transit from Level 3 beginning in 2010 and subsequently from Cogent.[1][2] They then went on to purchase additional transit from Telia, NTT, and Tata in order to get routes into the eyeball networks that began making a stink. They eventually agreed to do paid peering with the last-mile eyeball networks: AT&T, Verizon, Time Warner Cable, and Comcast. It seems as though you're not understanding peering and transit. Had they been paying for peering, there wouldn't have been an issue in the first place. Paid peering was the very thing the last-mile networks wanted.[3]

Prior to the rollout of their Open Connect CDN offering, Netflix used Akamai, Limelight, and Level 3 as their CDN providers. When they rolled out Open Connect, they offered ISPs the ability to peer directly with them at a number of different peering exchanges.[4] This was a few years after the Level 3/Comcast spat. You seem to be confusing events.

[1] https://qz.com/256586/the-inside-story-of-how-netflix-came-t...

[2] https://archive.is/2AC6C#selection-735.154-735.169

[3] https://arstechnica.com/tech-policy/2014/06/fcc-gets-comcast...

[4] https://techcrunch.com/2012/06/04/netflix-open-connect/


You are correct. I was imprecise. They were trusting another company to peer for them; they thought they were purchasing "CDN services". The problem is that one of those companies was Cogent. Cogent prides itself on settlement-free peering, but Netflix put so much traffic onto Cogent's network that it pushed beyond what the peering agreements allowed, and Cogent didn't want to pay for peering. Once Netflix took on peering themselves, things went much, much better, and customers have been pretty happy ever since.

If Netflix had understood peering from the beginning, they wouldn't have run into these issues and might have saved themselves some money.

Peering is what saved the Internet back in the late 90's through early 2000's and proved Metcalfe wrong. People using the Netflix problem to push for NN were wrong.


That is my reaction. Am I missing something here? Are Net Neutrality rules getting in the way?

I know when the NN topic was really hot, many were conflating treating the packets the same with peering agreements. Peering has nothing to do with NN.

Years ago people were complaining about Netflix performance issues and thinking that the ISP was throttling Netflix when it was an issue with peering capacity.

If the rules in the EU mess with peering negotiations, then the rules need to be fixed.


> I mean, sure, if you define "right" in that way, then you can say you have the "right" to literally anything.

If? No, that is how rights are defined in the US, as long as a right doesn't violate the rights of another person. This is what makes the US unique. The US government doesn't give us rights; we already have them. The Constitution grants limited powers to the government. That is why it wouldn't matter if the 2nd Amendment disappeared: we already have the right to self-defense. The 2nd just called out the fact that the government doesn't have the power to remove that right.

If the government moves to take guns away and kills people to do it, in other words, if they try to warp the meaning of our rights, then that is yet another reason to be armed.


Yeah, Docker initially didn't use SELinux, but that was before Red Hat took an interest. Red Hat likes making things more secure with SELinux.


Wouldn't it have been easier to check audit.log? Pump the contents through audit2allow and you will have a nice new policy module that would allow your setup. Heck, it will even tell you if there is already a boolean for that config.

I really don't see any reason to disable SELinux. Maybe back in RHEL5 days, but not since then. Just educate yourself on some tools. It really isn't that hard.
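The workflow described above can be sketched in a few commands. This is a minimal example, assuming auditd and the policycoreutils tools are installed; the module name "myapp" is purely illustrative:

```shell
# Show recent AVC (SELinux) denials from the audit log
ausearch -m avc -ts recent

# Feed the denials to audit2allow to generate a local policy module
# ("myapp" is just an illustrative name). audit2allow will note it
# if an existing boolean already covers the denial.
ausearch -m avc -ts recent | audit2allow -M myapp

# Load the generated module only after reviewing myapp.te
semodule -i myapp.pp
```

Reviewing the generated .te file before loading it matters: audit2allow happily writes rules that allow exactly what was denied, which may be broader than you want.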


Sure, it would have been easier for me to check the audit log, however the idea that it was an issue with SELinux didn't even cross my mind until I used strace. The vast majority of Linux systems I work on do not have it enabled.

You may not see a reason to disable SELinux, but not all Linux systems are RHEL, and many don't have it enabled to begin with. I personally would not enable it on a system that was not designed with it enabled by default.


Did you consult the audit.log? Did you pump it through audit2allow? The audit2allow tool will even tell you if the issue can be fixed by setting a boolean on the SELinux config. Most of the stuff I have run into recently can be fixed by setting a boolean.

Yeah, SELinux does require learning, but it adds a lot. I recently helped a friend of mine fix his PHP CMS after it was hacked. The PHP was hijacked and started attacking other instances on the hosting provider's network. If only he had not turned off SELinux, it would have prevented outgoing connections from the http server.
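The protection mentioned above comes from a boolean in the stock RHEL/Fedora targeted policy. A quick sketch (the default value shown is typical for RHEL/Fedora, but check your own system):

```shell
# Outbound connections from httpd are gated by a boolean that
# typically defaults to off on RHEL/Fedora:
getsebool httpd_can_network_connect

# With SELinux enforcing and the boolean off, a hijacked PHP process
# running under httpd cannot open arbitrary outgoing sockets.
# If an app legitimately needs outbound connections, flip the boolean
# persistently (-P) instead of disabling SELinux entirely:
setsebool -P httpd_can_network_connect on
```

There are also narrower booleans (e.g. httpd_can_network_connect_db for database connections), so it's worth listing them with `getsebool -a | grep httpd` before reaching for the broadest one.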


I have always run SELinux on Fedora/CentOS/RHEL and I don't remember a time when I had issues with authorized_keys. The only thing I recall recently about ssh is that it complains if the files in .ssh are not mode 600.

SELinux has come a long way since RHEL4 days.


I'm not talking about RHEL4. I'm talking recent Fedoras (within the last year or 2). Making a brand new ~/.ssh/authorized_keys file has never worked for me without running restorecon (which is the thing I can never remember).
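For what it's worth, the fix being described is a one-liner; the issue is that a freshly created file inherits a label from the creating process rather than the ssh_home_t type sshd expects:

```shell
# Reset everything under ~/.ssh to the policy's default labels
# (-R recursive, -v show what changed)
restorecon -Rv ~/.ssh

# Inspect the SELinux context on the file (-Z shows labels)
ls -Z ~/.ssh/authorized_keys
```
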

