fredag 29. mars 2013

Childish curiosity...

My 8-year-old just asked, "What would the atmosphere of Mars have to consist of for the planet not to lose it?" Hmmm... Would any composition be heavy enough not to get stripped away? And could we in any way speculate about life forms that could survive in such an atmosphere?

honestly...

Originally shared by Lauren Weinstein

The First Honest Cable Company - This video is (a) Not Safe for Family Viewing, (b) Not Safe for Work Viewing, and (c) Pretty Much Entirely Accurate.
http://www.youtube.com/watch?v=0ilMx7k7mso

tirsdag 26. mars 2013

This seems so logical I have to reshare.

Originally shared by Cilla

I'm all about logic...I mean...huh?

Too good not to share from that other place.

It's springtime. First day going to work on my bike after the long winter. Badly needed exercise...

Originally shared by Sven A Lindalen

LEB THIERRY was tagged in Sven A Lindalen's album.

torsdag 21. mars 2013

Food is expensive in Norway. That goes for dog and cat food as well.

http://www.businessweek.com/articles/2013-03-20/animal-planet#r=rss

A must read for any parent of a girl.

Of course we want to protect the little princess, but the best protection is to let her learn to live her own life, right?

Originally shared by God Emperor Lionel Lauer

All the pearl-clutching over what girls wear, etc, is missing the point.
Excerpt:
But this is precisely what is harming girls – not the length of their shorts, not who they share photos with, not who they have sex with – it's us, society, the grown-ups who are the cause of their malaise by demanding perfection from them and denying them the safety net afforded to boys (who don't have people victimising them nor fretting over their sexualisation). By wrapping girls in cotton wool, we deny them the right to stuff things up and then learn how to put it back together again.
http://www.dailylife.com.au/life-and-love/parenting-and-families/raising-girls-as-victims-20130320-2gebm.html

tirsdag 5. mars 2013

I don't dare comment on this. Period.

Originally shared by Helge D


PMS jokes aren't funny. Period.

mandag 4. mars 2013

I have always had somewhat mixed feelings when talking about Ubuntu.

I have always had somewhat mixed feelings when talking about Ubuntu. Mostly because I have never tried it. Not once. In many ways that disqualifies me from saying anything about the user-friendliness (I understand it's good) or Unity (a little more mixed, I understand). My reason for staying away has always been that Canonical seems to suffer from some kind of amnesia in all its PR. Where do you find any mention of Linux or GNU? No, it's all Ubuntu. Even the 'Ubuntu kernel'!

This is the third article here on G+ today in my circles about the same issues. Like those posters, I do not deny Canonical the right to build its own silo or otherwise make money on Linux. That is just fine. But I must say Red Hat feels like it contributes immensely more back to the open source community than Canonical does. Red Hat also sets Fedora free, so Fedora does not suffer from the kind of top-down attitude Canonical seems to have towards its community.

This piece by Aaron Seigo is very good. Even though I haven't tried KDE for years, his points should be easy to understand.

Originally shared by Aaron Seigo

I'm not one to shy away from discussion for fear of being right or wrong, but simply in the attempt to actually get things straight and share my current best understanding with others. A week or two ago I wrote a bit here about Canonical's vision for device-spectrum computing and how it was nowhere to be seen in their code, at least not in the fashion that their PR bits would like people to believe.

So today we get a fresh new bunch of information (see link below), and here's a not-so-quick review of what this new info brings to the discussion.

One of the important assertions made is that convergence is achieved by having a single display server and application API across devices. If this were so, then we'd have it today already, since we commonly use X.org and various toolkits across device form factors. XFCE on X.org would be a convergence UI. Which it isn't. So that's rubbish and should simply be ignored as any sort of valid motivation for writing a new display system.

The existing application API also does not provide what is needed for device-spectrum interface development. I won't go into why that is since it would not only make this posting even longer, but because I have no desire to help Canonical by pointing out actionable problems in their stack. Why this is so will become apparent below.

Where Canonical's claim to be working on a "convergence" system does start to conform to reality is the idea of a single desktop shell and window manager that adjusts depending on context. Interestingly, that's what we've been doing in Plasma Workspaces for a few years now. So that part is not rubbish.

(By the way: the practice of changing the names on everything they use that could remotely be traced back to others says something about the thought processes. See, they don't use Status Notifiers .. no, they have Application Indicators. They don't do "device spectrum" they do "device convergence", etc. It would be understandable if they were the first to any of these ideas, but they aren't; and it makes having conversations about these issues so much more complex than it needs to be. Anyways ...)

Turns out Canonical is working towards that vision by re-writing desktop-Unity in Qt/QML. They state that there will be some UI pieces that are form factor specific, and that's sensible; nothing to quibble with there. This rewrite is still nowhere to be seen, and given the number of direction changes we've seen thus far it's anything but a sure bet, but I'll give them the benefit of the doubt here. Once they have that beast done (April 2014 is the target, apparently) they will possibly be one step closer to having the device spectrum technology I called them out on. So it is possible that in a year's time I will be wrong on that one part. It's also possible that their shell doesn't .. well .. again, I really don't feel like pointing out the obvious (to me, anyways) technical limitations in their current designs.

The truly crazy part is that they are writing their own display manager to accomplish this. They dismiss Wayland, though it has pretty much the same design. The main differences are that Canonical doesn't control Wayland development and Canonical's system will weld everything into one process: display manager, desktop shell, window management, output management, input event handling ... It's an interesting approach. Not one I'd take for technical reasons, but hey ..

The biggest issue I see is that they are going it on their own and diverging from the rest of the Free software ecosystem with a software stack they have been developing behind closed doors and which will require you to sign over your copyright in order for you to contribute to it.

They have effectively sealed themselves off from the rest of the Free software world. They will shoulder porting and maintaining Qt, Gtk+, XUL, etc. to their system. They will shoulder porting applications to the integration points (most of which will be delivered in Qt apps). They will not be sharing desktop shell infrastructure with anyone else, and using their Free software on other platforms will become increasingly difficult.

All of this depends on a couple of high level designs that are filled mostly with TODOs. So perhaps in a year's time (probably more) they will have delivered what they say they are doing right now, and my objection will then be erased. If that happens, it will be at the cost of becoming another "not really Linux" Linux that lives in its own universe. It will be Android minus Google.

Before closing, I would like to point out that Canonical is once again trying to rewrite the present as well as history, as can be seen in the intro of the UnityNextSpec: "From the very beginning, Unity's concepts were tailored with a converged world in mind, where the overall system including the UI/UX scales across and adapts to a wide variety of different form factors." Looking at the very beginning of Unity right up until today, this is obvious nonsense. The rest of the spec spends its time explaining why none of the original design decisions in Unity will translate, other than it being "a shell, with a launcher, indicators, switcher, dash etc.". Well .. yeah.

Moreover, in http://www.markshuttleworth.com/archives/661 Mark Shuttleworth stated that "Unity was simply the new name for work which has been ongoing since 2007: The Ubuntu Netbook Remix interface." and if we look at the original UNR none of these principles are seen to be at work. None. Eventually UNR was made to use Unity but still with none of the ideas apparent in the implementation. Ubuntu TV is similar in this regard.

Also keep in mind how the move to Qt was positioned by Canonical: "The decision to be open to Qt is in no way a criticism of GNOME. It’s a celebration of free software’s diversity and complexity. [..]  Our work on design is centered around GNOME, with settings and preferences the current focus as we move to GNOME 3.0 and gtk3." From http://www.markshuttleworth.com/archives/568 just two years ago.

What is perhaps a more accurate statement is that Canonical fumbled through various ideas and technologies iteratively until they landed on the current concepts, dictated in part by technology and in part by business. There is nothing wrong with this; it's often how creativity happens. It is not, however, the six-year prescience being claimed .. especially as we're still at least one year away from possibly seeing the current vision become achievable in practice.

So we have a new separate silo competing with the rest of the silos plus the open efforts (e.g. Wayland), while we are asked to accept a rewrite of a history Canonical is evidently not proud of ... "but this time it will be different, guys!"

All that said, by next spring, I might end up being wrong about the shell being form factor specific. If that does end up being the case, I apologize in advance, and I will reiterate that statement if it comes to pass.

(To be clear: Other than the shell itself, I stand by my claims as strongly as I did a week ago. That is based on reading code, going through design documents and published specs. I would suggest that if you haven't done that, you aren't in a place to offer a critique.)
https://lists.ubuntu.com/archives/ubuntu-devel/2013-March/036776.html

søndag 3. mars 2013

Good morning, America. This is your wake-up call.
http://www.youtube.com/watch?v=QPKKQnijnsM

Discussions about how to efficiently run Linux in corporate networks pop up from time to time.

Originally shared by birger monsen

Discussions about how to efficiently run Linux in corporate networks pop up from time to time. I worked on such a setup some years ago, so here are a few of the ideas that went into our system.

Merge into existing infrastructure

Existing infrastructure at our site used a PXE-booted BartPE (a customized mini-Windows environment) app that asked the person at the console a few basic questions before starting installation of Windows. We had the Windows guys add a Windows/Linux choice to their installer, and if Linux was selected the user was offered the possibility to override the swap partition size (the default was computed from physical RAM and available disk). The BartPE installer then patched in a small partition with Anaconda set up to do a kickstart install.
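
To make the swap computation concrete, here is a minimal sketch of what such a kickstart %pre script could look like in Python. The sizing rule, thresholds and file names below are my own assumptions for illustration, not the original script:

    %pre --interpreter=/usr/bin/python
    # Hypothetical %pre: derive a default swap size from physical RAM and
    # write a partitioning snippet for the main kickstart to %include.
    import re

    def ram_mb():
        # Total memory in MB, read from /proc/meminfo (reported in kB).
        with open('/proc/meminfo') as f:
            for line in f:
                m = re.match(r'MemTotal:\s+(\d+) kB', line)
                if m:
                    return int(m.group(1)) // 1024
        return 2048  # conservative fallback

    ram = ram_mb()
    # Made-up rule: twice RAM below 2 GB, equal to RAM above, capped at 8 GB.
    swap = min(2 * ram if ram < 2048 else ram, 8192)

    with open('/tmp/partitioning', 'w') as f:
        f.write('part swap --size=%d\n' % swap)
        f.write('part / --size=1 --grow\n')
    %end

The main kickstart file would then pull the snippet in with '%include /tmp/partitioning', and an override entered in the console dialog could simply replace the computed number.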

The support people who handled on-site support for Windows could easily be trained to handle any Linux-related tasks as well. Mostly it was just installing or removing hardware, and with Linux they didn't even have to worry about installing drivers.

We also used LDAP/Kerberos for centralized administration. We ran our own OpenLDAP-based server since we were uneasy about having all clients depend on AD as the LDAP/Kerberos server. What if some Microsoft patch one night rendered all Linux workstations inoperable the next day? So we set up our OpenLDAP with replication from AD, enabling us to use the existing data in AD. Today we would probably have gone with direct attachment to AD, since at least Fedora now works very well directly against AD through realmd and sssd (winbind is not needed anymore) and can work offline if AD should act up.

Printing was greatly simplified since a 'follow me' print system was already in use for Windows. Using the same centralized user administration, we could just offer the follow-me print queue and spool to the existing print servers. Users then swiped their card at any printer to fetch their documents.


Modularity

During kickstart we installed a bare minimum to enable people to start working. Today the list would be GNOME, Chrome, LibreOffice and Evolution. That way the workstation would boot and be operational within 10 minutes after it was first PXE-booted for install. Ten years ago that was fast! Additional software to install was then determined by the LDAP group membership of the host; a sketch of that lookup follows below. KDE was always installed as an alternative desktop. For some departments the installation of the rest of the software could take hours, but it happened in the background.
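
As an illustration of the group-membership lookup, here is a rough sketch of how it could be done today using the ldap3 Python library; the base DN, group names and package mapping are invented for the example:

    import socket
    import subprocess
    from ldap3 import Server, Connection, ALL

    # Hypothetical mapping from LDAP host groups to extra package sets.
    GROUP_PACKAGES = {
        'cn=dept-math,ou=hostgroups,dc=example,dc=com': ['texlive', 'octave'],
        'cn=dept-devel,ou=hostgroups,dc=example,dc=com': ['gcc', 'gdb', 'git'],
    }

    def groups_for_host(hostname):
        # Find all host groups this workstation is a member of.
        conn = Connection(Server('ldap.example.com', get_info=ALL),
                          auto_bind=True)
        conn.search('ou=hostgroups,dc=example,dc=com',
                    '(member=cn=%s,ou=hosts,dc=example,dc=com)' % hostname,
                    attributes=['cn'])
        return [entry.entry_dn for entry in conn.entries]

    packages = []
    for group in groups_for_host(socket.gethostname().split('.')[0]):
        packages.extend(GROUP_PACKAGES.get(group, []))
    if packages:
        # Install in the background so the user can start working right away.
        subprocess.call(['yum', '-y', 'install'] + packages)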

We mirrored all the external repositories we used into our 'alpha' repos. We also had one repo for our own add-on packages. The Linux desktop team (all three of us) had workstations that got their updates directly from the alpha repos. When new updates didn't crash our PCs, the updates were migrated to the 'beta' repos. Through AD/LDAP memberships, one or two workstations at each department/faculty were set up to use the beta repos. The users of those workstations knew they had a responsibility to react quickly if something broke. After a week, packages would then migrate from beta to prod and find their way to all workstations. This setup enabled us to run a bleeding-edge distro like Fedora safely across a whole university.
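
The beta-to-prod promotion boils down to copying packages between directory trees and regenerating the repo metadata. A minimal sketch, with the paths and the strict seven-day rule as assumptions since the original script is long gone:

    import os
    import shutil
    import subprocess
    import time

    BETA = '/srv/repos/beta/x86_64'   # hypothetical repo paths
    PROD = '/srv/repos/prod/x86_64'
    WEEK = 7 * 24 * 3600

    now = time.time()
    for rpm in os.listdir(BETA):
        if not rpm.endswith('.rpm'):
            continue
        src = os.path.join(BETA, rpm)
        # Promote packages that have survived a week in beta unchanged.
        if now - os.path.getmtime(src) > WEEK:
            shutil.copy2(src, os.path.join(PROD, rpm))

    # Rebuild the yum metadata so clients see the promoted packages.
    subprocess.call(['createrepo', '--update', PROD])

Run from cron on the repo server, something like this gives every package an automatic one-week soak in beta before it reaches production machines.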

We didn't use Puppet back then, but would definitely do so today. The Puppet server for your workstations should not be the same as the one for your servers. The philosophies are too different, and the rulesets should follow completely different strategies. On the other hand, you may want to use the same Puppet server for both Linux and Mac clients.


Support

We decided that since GNOME had facilities for centralized profiles that could do full or partial lock-down of the desktop, as well as selecting profiles based on group membership, it would be our 'supported' desktop. KDE would be supported in the sense that we would make sure it worked, but the help desk only had procedures for helping with GNOME problems. KDE users would have to help each other. This worked very nicely since (as we expected) KDE users were the least inclined to want any kind of centralized configuration. They wanted to tweak everything.
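
On a current GNOME the equivalent lock-down goes through dconf system databases and lock files. A sketch of how a provisioning script might lay those out (the wallpaper key is just an example of a locked setting):

    import os
    import subprocess

    # System-wide defaults live in a dconf 'local' system database.
    db_dir = '/etc/dconf/db/local.d'
    os.makedirs(os.path.join(db_dir, 'locks'), exist_ok=True)

    with open(os.path.join(db_dir, '00-corporate'), 'w') as f:
        f.write('[org/gnome/desktop/background]\n')
        f.write("picture-uri='file:///usr/share/backgrounds/corp.png'\n")

    # A lock file prevents users from overriding the default.
    with open(os.path.join(db_dir, 'locks', '00-corporate'), 'w') as f:
        f.write('/org/gnome/desktop/background/picture-uri\n')

    # Make the 'user' profile consult the local system database.
    with open('/etc/dconf/profile/user', 'w') as f:
        f.write('user-db:user\nsystem-db:local\n')

    subprocess.call(['dconf', 'update'])  # compile the binary databases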

With Fedora it was easy to give users the ability to install any package they wanted from the pre-configured repositories. In our case, that meant our own beta and prod repos.


Conclusion

A system like this could be set up by three Linux specialists, and could be run long-term by two. Given a help desk that could handle the easy problems as well as dispatch people for problems that needed hands-on work, the tasks for the Linux desktop group were mainly:
- test new hardware proactively
- create custom configurations for packages
- package software for our local repo
- 2nd level support


lørdag 2. mars 2013

Help this penguin fly.

We need real news about Linux, not just the ongoing cut-and-paste between the so-called tech journals. These geeks have shown over a few short weeks that they can publish original content. It may be worth a donation to keep them going.

http://www.indiegogo.com/projects/support-linux-advocates?c=home

Money=Work/Knowledge