February 15, 2013


ssh-import-id now supports -r|--remove keys

Dustin Kirkland

As a brief followup to my recent post about ssh-import-id now supporting GitHub in addition to Launchpad, I should mention that I've also added a new feature for removing previously imported keys.

Here's an example, importing kirkland's public keys from Launchpad.

kirkland@x220:~$ ssh-import-id lp:kirkland
2013-02-15 14:53:46,092 INFO Authorized key ['4096', 'd3:dd:e4:72:25:18:f3:ea:93:10:1a:5b:9f:bc:ef:5e', 'kirkland@x220', '(RSA)']
2013-02-15 14:53:46,101 INFO Authorized key ['2048', '69:57:f9:b6:11:73:48:ae:11:10:b5:18:26:7c:15:9d', 'kirkland@mac', '(RSA)']
2013-02-15 14:53:46,102 INFO Authorized [2] SSH keys

And now let's remove those keys...

kirkland@x220:~$ ssh-import-id -r lp:kirkland
2013-02-15 14:53:49,528 INFO Removed labeled key ['4096', 'd3:dd:e4:72:25:18:f3:ea:93:10:1a:5b:9f:bc:ef:5e', 'kirkland@x220', '(RSA)']
2013-02-15 14:53:49,532 INFO Removed labeled key ['2048', '69:57:f9:b6:11:73:48:ae:11:10:b5:18:26:7c:15:9d', 'kirkland@mac', '(RSA)']
2013-02-15 14:53:49,532 INFO Removed [2] SSH keys

Neat!

The way this works is that ssh-import-id now adds a comment to the end of each line it appends to your ~/.ssh/authorized_keys file, "tagging" the keys that it adds.  When removing keys, it simply looks for keys tagged accordingly.
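In pseudo-form, the tag-and-remove logic might look something like this (a simplified Python sketch for illustration only, not the actual ssh-import-id code; the label format shown here is an assumption):

```python
# Simplified sketch of label-based key removal; not the real
# ssh-import-id implementation, and the label format is invented.

LABEL = "# ssh-import-id lp:kirkland"  # hypothetical tag comment

def import_key(lines, key):
    """Append a key with a tag identifying where it came from."""
    lines.append(key.rstrip() + " " + LABEL)
    return lines

def remove_imported(lines):
    """Drop every key that carries the importer's tag."""
    return [l for l in lines if not l.endswith(LABEL)]

authorized = ["ssh-rsa AAAA... user@host"]      # a manually added key
authorized = import_key(authorized, "ssh-rsa BBBB... kirkland@x220")
authorized = remove_imported(authorized)
# only the manually added, untagged key survives
```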

Enjoy!

:-Dustin
on February 15, 2013 09:02 PM

Happy Steam Launch, celebrate with frags!

Marco Ceppi


Blow off some Steam, now officially released for Linux, by joining this Canonical-sponsored Team Fortress 2 server deployed with Juju!

on February 15, 2013 05:11 PM

Spending whole day with just Chromebook

Marcin Juszkiewicz

Today I am working from Berlin (visiting Daniel Holbach) and took only my Chromebook with me, to check how well (or badly) it works as a laptop replacement for me.

The first issues appeared within minutes. It was the keyboard, or rather the keys which are missing from it. The XFCE terminal (my main tool) switches between tabs with Ctrl-PgUp/PgDn, but I lack those keys. It is good that I can edit GTK shortcuts, but removing them is possible only with the Delete key. And guess what — the Chromebook lacks that one as well ;D So I used some crazy Emacs-like shortcuts (Ctrl-LAlt-Shift-something).

A good thing is the support for 5GHz WiFi. I have to consider such a change at home and provide not only a 2.4GHz but also a 5GHz network (I can see around twenty 802.11g networks at home).

A terrible issue is power plug detection. I took the Chromebook from my backpack, booted it and got a “97% charged, AC connected” message while working on battery. That is a serious problem, as no one likes random shutdowns just because the battery went flat.

So there are a few things to do:

  • better keymap
  • fixed power state detection

And then I can go to Hong Kong (for Linaro Connect Asia) with Chromebook only.


All rights reserved © Marcin Juszkiewicz
Spending whole day with just Chromebook was originally posted on Marcin Juszkiewicz website

on February 15, 2013 04:20 PM

Wow with 4.10

Jonathan Riddell

KDE Project:

  • DCOP

A nice e-mail I received about KDE SC 4.10...

Hi there,
Wasn't sure where to send this email but wanted to send a huge thank you
to the Kubuntu team. I have no idea what happened in 4.10 but my God
everything is fast!! Everything loads so quickly and so stable so far.
This is by far the best release I have ever used for a linux distribution.
I don't see windows 7 being used in the coming weeks.
Please forward this to those who would really appreciate it!! Thank you
again and please keep up the amazing work!!
Cheers,
Asa

on February 15, 2013 03:25 PM

Ubuntu OpenStack Activity Update, February 2013

James Page

Folsom 2012.2.1 Stable Release Update

A number of people have asked about the availability of OpenStack 2012.2.1 in Ubuntu 12.10 and the Ubuntu Cloud Archive for Folsom; well, it's finally out!

Suffice to say it took longer than expected, so we are making some improvements to the way we manage these micro-releases going forward, which should streamline the process for 2012.2.3.

Cloud Archive Version Tracker

In order to help users and administrators of the Ubuntu Cloud Archive track which versions of what are where, the Ubuntu Server team are now publishing Cloud Archive reports for Folsom and Grizzly.

Grizzly g2 is currently working its way into the Ubuntu Cloud Archive (it's already in Ubuntu Raring) and should finish landing in the updates pocket next week.

News from the CI Lab

We now have Ceph fully integrated into the testing that we do around OpenStack; this picked up a regression in Nova and Cinder in the run up to 2012.2.1.

This highlights the value of the integration and system testing that we do in the Ubuntu OpenStack CI lab (see my previous post for details on the lab). Identifying regressions was high on the list of initial objectives we agreed for this function!

Focus at the moment is on enabling testing of Grizzly on Raring (it's already up and running for Precise) and working on an approach to testing the OpenStack Charm HA work currently in flight within the team. In full this will require upwards of 30 servers to test, so we are working on a charm that deploys Ubuntu Juju and MAAS (Metal-as-a-Service) on a single, chunky server, allowing for physical-server-like testing of OpenStack in KVM. For some reason, seeing 50 KVM instances running on a single server is somewhat satisfying!

This work will also be re-used for more regular, scheduled testing outside of the normal build-deploy-test pipeline for scale-out services such as Ceph and Swift – more to follow on this…

Ceilometer has also been added to the lab; at the moment we are build testing and publishing packages in the Grizzly Trunk PPA; Yolanda is working on a charm to deploy Ceilometer.

Ceph LTS Bobtail

The next Ceph LTS release (Bobtail) is now available in Ubuntu Raring and the Ubuntu Cloud Archive for Grizzly.

One of the key highlights for this release is the support for Keystone authentication and authorization in the Ceph RADOS Gateway.

The Ceph RADOS Gateway provides multi-tenant, highly scalable object storage through Swift and S3 RESTful interfaces.

Integration of the Swift protocol with Keystone completes the complementary story that Ceph provides when used with OpenStack.

Ceph can fulfil ALL storage requirements in an OpenStack deployment: it's integrated with Cinder and Nova for block storage and with Glance for image storage, and it can now directly provide integrated, Swift-compatible, multi-tenant object storage.

Juju charm updates to support Keystone integration with Ceph RADOS Gateway are in the Ceph charms in the charm store.


on February 15, 2013 02:50 PM

It takes two

Daniel Holbach

At the last UDS we talked quite a bit about LoCo teams during the Leadership Mini Summit. One interesting point was that many seemed to have the impression that events have to be big, and that everything has to follow an established protocol or a rigid process. That’s not the case.

I’m sure my friend Jorge Castro would agree with me if I told you to JFDI. The result of not doing things is that things will not get done. Setting up an event is sometimes just a matter of sending a mail to the team and asking everyone to come to a certain place at a certain date and time. Another point discussed was the number of people. Seriously, if it’s just two of you who hang out and make Ubuntu better or just have a good time together, that’s so much better than not meeting at all.

The reason I write all of this is that we’re getting closer to Ubuntu Global Jam again, and some of you might be considering setting up an event and adding it to the LoCo Team Portal but might still be a bit unsure. There’s really no need to be.

It’s very, very likely you don’t need a huge venue with lots of bells and whistles. Maybe just meeting in a coffee shop will be good enough? A room in your local university? Or invite people to your place? Anywhere with internet access might do. You might get to know some new local team members, and it’s all about having a good time.

We have instructions up on how to set up a jam, a video, and you can always ask for advice. Join the Ubuntu Global Jam today!

on February 15, 2013 12:17 PM

Planning for Google Summer of Code

Valorie Zimmerman

Now that Google has announced GSoC 2013, we will soon hear the rules and schedule. It's never too early to plan for your participation, whether as a student, a mentor, or a KDE administration team member.

I recently read an interesting book about time and how we perceive it, based on recent neuroscience: Time Warped: Unlocking the Mysteries of Time Perception, by Claudia Hammond. The section on planning projects seemed applicable to both mentors and students. Of course, for students, this is the time to get involved with the KDE community, figure out what projects look interesting, and start learning the development process.

The crucial resource is the Handbook, written by participants. There is a Mentor's Guide and a Student's Guide, both also available in ebook format.

In addition to the Handbook, Time Warped offers some valuable insight into the Planning Fallacy, which is the tendency to believe that a job will take less time than it eventually does. The admins and mentors work with students to create a realistic and detailed timeline, which is one of the important ways to outwit this human tendency. Hammond suggests that you consider your plan and then compare its parts to projects you have done in the past, to fine-tune your time frame to completion. Hammond also warns against the common belief that we will have more time in the future than we have now. This caution is very important for mentors too, and it is one reason KDE always tries to have at least one back-up mentor for each accepted project, as well as the teams for general help.

Finally, since other people make more accurate judgements about our time, Hammond suggests that you describe the task to a friend and ask them to guess how long it will take you. Those who have mentored before can help new mentors with this, and students can ask those who have seen their previous programming work to help judge the prospective plan.

I look forward to seeing KDE folks, experienced and brand-new, getting to know one another, and digging into the code.
on February 15, 2013 08:36 AM

Can Ubuntu Server Roll Too?

Robbie Williamson

Wow…I just realized how long it’s been since I did a blog post, so apologies for that first off.  FWIW, it’s not that I haven’t had any good things to say or write about, it’s just that I haven’t made the time to sit down and type them out….I need a blog thought transfer device or something.  Anyway, with all the talk about Ubuntu doing a rolling release, I’ve been thinking about how that would affect Ubuntu Server releases, and more importantly….could Ubuntu Server roll as well?  In answering this question, I think it comes down to two main points of consideration (beyond what the client flavors would already have to consider).

 

How Would This Affect Ubuntu Server Users?

We have a lot of anecdotal data and some survey evidence that most Ubuntu Server users mainly deploy the LTS.  I doubt this surprises people, given the support life for an LTS Ubuntu Server release is 5 years, versus only 18 months for a non-LTS Ubuntu Server release.  Your average sysadmin is extremely risk averse (for good reason), and thus wants to minimize any risk of unwanted change in his/her infrastructure.  In fact, most production deployments don’t even pull packages from the main archives; instead they mirror them internally to allow for control of exactly what and when updates and fixes roll out to internal client and/or server machines.  Using a server operating system that requires you to upgrade every 18 months, to continue getting fixes and security updates, just doesn’t work in environments where the systems are expected to support 100s to 1000s of users for multiple years, often without significant downtime.

With that said, I think there are valid uses of non-LTS releases of Ubuntu Server, with most falling into two main categories: Pre-Production Test/Dev or Start-Ups, with the reasons actually being the same.  The non-LTS version is perfect for those looking to roll out products or solutions intended to be production ready in the future.  These releases provide users a mechanism to continually test out what their product/solution will eventually look like in the LTS, as the versions of the software they depend upon are updated along the way.  That is, they’re not stuck having to develop against the old LTS and hope things don’t change too much in two years, or use some “feeder” OS, where there’s no guarantee the forked and backported enterprise version will behave the same or contain the same versions of the software they depend on.  In both of these scenarios, the non-LTS is used because it’s fluid, and going to a rolling release only makes this easier…and a little better, I dare say.
For one, if the release is rolling, there’s no huge release-to-release jump during your test/dev cycle; you just continue to accept updates when ready.  In my opinion, this is actually easier in terms of rolling back as well, in that you have fewer parts moving all at once to roll back if needed.  The second thing is that the process for getting a fix or a new feature from upstream is much less involved, because there’s no SRU patch backporting, just the new release with the new stuff.  Now admittedly, this also means the possibility of new bugs and/or regressions; however, given these versions (or ones built subsequently) are destined to be in the next LTS anyway, the faster the bugs are found and sorted, the better for the user in the long term.  If your solution can’t handle the churn, you either don’t upgrade and accept the security risk, or you smoke test your solution with the new package versions in a duplicate environment.  In either case, you’re not running in production, so in theory…a bug or regression shouldn’t be the end of the world.  It’s also worth calling out that, from a quality and support perspective, a rolling Ubuntu Server means Ubuntu developers and Canonical engineering staff who normally spend a lot of time doing SRUs on non-LTS Ubuntu Server releases can now focus efforts on the Ubuntu Server LTS release…where we have the majority of users and deployments.

 

How Would This Affect Juju Users?

In terms of Juju, a move to a rolling release tremendously simplifies some things and mildly complicates others.  From the point of view of a charm author, this makes life much easier.  Instead of writing a charm to use a package in one release, then continuously duplicating and updating it to work with subsequent releases that have newer packages, you only maintain two charms…a maximum of three if you want to include options for running code from upstream.  The idea is that every charm in the collection would default to using packages from the latest Ubuntu Server LTS, with options to use the packages in the rolling release, and possibly an extra option to pull and deploy direct from upstream.  We already do some of this now, but it varies from charm to charm…a rolling server policy would demand we make this mandatory for all accepted charms.  The only place where the rules would be slightly different is the Ubuntu Cloud Archive, where the packages don’t roll; instead, new archive pockets are created for each OpenStack release.  From a user's perspective, a rolling release is good, yet it is also complicated unless we help…and we will.  In terms of the good, users will know every charmed service works and only have to decide between LTS and rolling as the deployment OS, whereas now they have to choose a release, then hope the charm has been updated to support that release.  The reduction in charm-to-release complexity also allows us to do better testing of charms, because we don’t have to test every charm against oneiric, precise, raring, “s”, etc, just precise and the rolling release…giving us more time to improve and deepen our test suites.

With all that said, a move to a rolling Ubuntu Server release for non-LTS also adds the danger of inconsistent package versions for a single service in a deployment.  For example, you could deploy a solution with 5 instances of wordpress 3.5.1 running, we update the archive to wordpress 3.6, and then you decide to add 3 more units, thus giving you a wordpress service of mixed versions…this is bad.  So how do we solve this?  It’s actually not that hard.  First, we would need to ensure that Juju never automatically adds units to an existing service if there’s a mismatch in the version of binaries between the currently deployed instances and the new ones about to be deployed.  If Juju detected the binary inconsistency, it would need to return an error, optionally asking the user if he/she wanted it to upgrade the currently running instances to match the new binary versions.  We could also add some sort of --I-know-what-I-am-doing option to give freedom to those users who don’t care about having version mismatches.  Secondly, we should ensure an existing deployment can always grow itself without requiring a service upgrade.  My current thinking around this is that we’d create a package caching charm that can be deployed against any existing Juju deployment.  The idea is much like squid-deb-proxy (except that the cache never expires or renews), where the caching instance acts as the archive mirror for the other instances in the deployment, providing the same cached packages deployed in that given solution.  The package cache should be run in a separate instance with persistent storage, so that even if the service completely goes down, it can be restored with the same packages in the cache.
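The add-unit guard described above could be sketched roughly like this (hypothetical Python, not actual Juju code; every name here is invented for illustration):

```python
# Hypothetical sketch of the add-unit version guard described above;
# none of these names exist in Juju itself.

class VersionMismatch(Exception):
    """Raised when new units would run different binary versions."""

def add_units(deployed, new_version, count, force=False):
    """Refuse to mix package versions in one service unless forced."""
    existing = set(deployed)
    if existing and new_version not in existing and not force:
        raise VersionMismatch(
            "service runs %s but the archive now has %s" %
            (sorted(existing), new_version))
    return deployed + [new_version] * count

units = ["3.5.1"] * 5                    # five wordpress 3.5.1 units
try:
    units = add_units(units, "3.6", 3)   # archive moved on: refused
except VersionMismatch:
    pass
units = add_units(units, "3.6", 3, force=True)  # explicit override
```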

 

So…Can Ubuntu Server Roll?


I honestly think we can and should consider it, but I’d also like to hear the concerns of folks who think we shouldn’t.


on February 15, 2013 04:59 AM

More subunit needs

Robert Collins

Of course, as happens sadly often, the scope creeps…

Additional pain points

Zope’s test runner runs things that are not tests, but which users want to know about – ‘layers’. At the moment these are reported as individual tests, but this is problematic in a couple of ways. Firstly, the same ‘test’ runs on multiple backend runners, so timing and stats get more complex. Secondly, if a layer fails to setup or teardown, tools like testrepository that have watched the stream will think a test failed, and on the next run try to explicitly run that ‘test’ – but that test doesn’t really exist, so it won’t run [unless an actual test that needs the layer is being run].

OpenStack uses the Python coverage tool to gather coverage statistics during test runs. Each worker running tests needs to gather and return such statistics. The current subunit protocol has no space to hand this around without it pretending to be a test [see a pattern here?]. And that has the same negative side effect – test runners like testrepository will try to run that ‘test’. While testrepository doesn’t want to know about coverage itself, it would be nice to be able to pass everything around and have a local hook handle the aggregation of that data.

The way TAP is reflected into subunit today is to mangle each tap ‘test’ into a subunit ‘test’, but for full benefits subunit tests have a higher bar – they are individually addressable and runnable. So a TAP test script is much more equivalent to a subunit test. A similar concept is landing in Python’s unittest soon – ‘subtests’ – which will give very lightweight additional assertions within a larger test concept. Many C test runners that emit individual tests as simple assertions have this property as well – there may be 5 or 10 executables each with dozens of assertions, but only the executables are individually addressable – there is no way to run just one assertion from an executable as a ‘test’. It would be nice to avoid the friction that currently exists when dealing with that situation.

Minimum requirements to support these

Layers can be supported via timestamped stdout output, or fake tests. Neither is compelling, as the former requires special casing in subunit processors to data mine it, and the latter confuses test runners.  A way to record something that is structured like a test (has an id – the layer, an outcome – in progress / ok / failed, and attachment data for showing failure details) but isn’t a test would allow the data to flow around without causing confusion in the system.

TAP support could change to just show the entire output as progress on one test and then fail or not at the end. This would result in a cognitive mismatch for folk from the TAP world, as TAP runners report each assertion as a ‘test’, and this would be hidden from subunit. Having a way to record something that is associated with an actual test, and has a name, status, attachment content for the TAP comments field – that would let subunit processors report both the addressable tests (each TAP script) and the individual items, but know that only the overall scripts are runnable.

Python subtests could use a unique test for each subtest, but that has the same issue as layers. Python will ensure a top-level test errors if a subtest errors, so strictly speaking we probably don’t need an associated-with concept, but we do need to be able to say that a test-like thing happened that isn’t actually addressable.

Coverage information could be about a single test, or even a subtest, or it could be about the entire work undertaken by the test process. I don’t think we need a single standardised format for Coverage data (though that might be an excellent project for someone to undertake).  It is also possible to overthink things. We have the idea of arbitrary attachments for tests. Perhaps arbitrary attachments outside of test scope would be better than specifying stdout/stderr as specific things. On the other hand stdout and stderr are well known things.

Proposal version 2

A packetised length prefixed binary protocol, with each packet containing a small signature, length, routing code, a binary timestamp in UTC, a set of UTF8 tags (active only, no negative tags), a content tag – one of (estimate + number, stdin, stdout, stderr, file, test), test-id, runnable, test-status (one of exists/inprogress/xfail/xsuccess/success/fail/skip), an attachment name, mime type, a last-block marker and a block of bytes.
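To make the framing concrete, here is a rough sketch of writing one such length-prefixed packet (the field layout, ordering, and signature byte are illustrative guesses, not the final wire format):

```python
import io
import struct

SIGNATURE = 0xB3  # illustrative magic byte, not the real signature

def write_packet(stream, payload, route_code=b"", timestamp=0.0):
    """Sketch of length-prefixed framing: signature, body length,
    route code length, UTC timestamp, route code, then payload."""
    body = struct.pack(">Bd", len(route_code), timestamp)
    body += route_code + payload
    stream.write(struct.pack(">BI", SIGNATURE, len(body)) + body)

buf = io.BytesIO()
write_packet(buf, b"test: ok", route_code=b"0", timestamp=1360893600.0)
data = buf.getvalue()
# data[0] is the signature; data[1:5] hold the body length
```

A reader would do the inverse: read the fixed header, then read exactly the advertised number of body bytes, which is what makes packets easy to route and multiplex.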

The std/stdout/stderr content tags are gone, replaced with file. The names stdin,stdout,stderr can be placed in the attachment name field to signal those well known files, and any other files that the test process wants to hand over can be simply embedded. Processors that don’t expect them can just pass them on.

Runnable is a boolean, indicating whether this packet is describing a test that can be executed deliberately (vs an individual TAP assertion, Python sub-test etc). This permits describing things like zope layers, which are top-level test-like things (they start, stop and can error) though they cannot be run… and it doesn’t explicitly model the setup/teardown aspect that they have. Should we do that?

Testid is for identifying tests. With the runnable flag to indicate whether a test really is a test, subtests can just be namespaced by the generator – reporters can choose whether to be naive and report every ‘test’, or whether to use a simple prefix-with-non-character-separator convention to infer child elements.
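The prefix inference could be as trivial as the following sketch (the NUL separator is an assumption; any non-identifier character would do):

```python
SEP = "\x00"  # hypothetical non-character separator

def children(test_id, all_ids):
    """Return the ids namespaced under test_id by the separator."""
    prefix = test_id + SEP
    return [i for i in all_ids if i.startswith(prefix)]

ids = [
    "pkg.TestFoo",
    "pkg.TestFoo\x00subtest-1",
    "pkg.TestFoo\x00subtest-2",
    "pkg.TestBar",
]
subs = children("pkg.TestFoo", ids)  # the two subtest entries
```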

Impact on Python API

If we change the API to:

class TestInfo(object):
    id = unicode
    status = ('exists', 'inprogress', 'xfail', 'xsuccess', 'success', 'fail', 'error', 'skip')
    runnable = boolean

class StreamingResult(object):
    def startTestRun(self):
        pass
    def stopTestRun(self):
        pass
    def estimate(self, count, route_code=None, timestamp=None):
        pass
    def file(self, name, bytes, eof=False, mime=None, test_info=None, route_code=None, timestamp=None):
        """Inform the result about the contents of an attachment."""
    def status(self, test_info, route_code=None, timestamp=None):
        """Inform the result about a test status with no attached data."""

This would permit the full semantics of a subunit stream to be represented, I think, while being a narrow interface that should be easy to implement.
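As a sanity check on the shape of the interface, a minimal consumer might just tally outcomes while skipping non-runnable entries (layers, TAP assertions, Python subtests). This is only a sketch against the proposed API; the namedtuple stands in for the TestInfo class above:

```python
from collections import namedtuple

# Stand-in for the proposed TestInfo; only the fields used here.
TestInfo = namedtuple("TestInfo", "id status runnable")

class CountingResult(object):
    """Tally final statuses, ignoring non-runnable test-like things."""
    def __init__(self):
        self.counts = {}
    def startTestRun(self):
        pass
    def stopTestRun(self):
        pass
    def status(self, test_info, route_code=None, timestamp=None):
        if test_info.runnable:
            self.counts[test_info.status] = (
                self.counts.get(test_info.status, 0) + 1)

result = CountingResult()
result.startTestRun()
result.status(TestInfo("t1", "success", True))
result.status(TestInfo("t1.sub", "fail", False))  # subtest: skipped
result.status(TestInfo("t2", "fail", True))
result.stopTestRun()
```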

Please provide feedback! I’ll probably start implementing this soon.


on February 15, 2013 02:08 AM

February 14, 2013


12.04.2 Released!

Mythbuntu

Mythbuntu 12.04.2 has been released. This is a point release on our 12.04 LTS release. If you are already on 12.04, you can get these same updates via the normal update process.

Highlights

  • MythTV 0.25 (2:0.25.2+fixes.20120802.46cab93-0ubuntu1)
  • Starting with 12.04, the Mythbuntu team will only be doing LTS releases. See this page for more info.
  • Enable MythTV and Mythbuntu Updates repositories directly from the Mythbuntu Control Centre without needing to install the mythbuntu-repos package
  • This is the first release with the LTS HW enablement stack.  It will support newer hardware than the old Mythbuntu 12.04 release.  For more information see https://wiki.ubuntu.com/PrecisePangolin/ReleaseNotes/UbuntuDesktop#LTS_Hardware_Enablement_Stack

Underlying system

  • Underlying Ubuntu updates are found here
  • Updated system requirements

MythTV

  • Recent snapshot of the MythTV 0.25 release is included (see 0.25 Release Notes)
  • Mythbuntu theme fixes

For more detailed feature information please visit us on launchpad.

We appreciate all comments and would love to hear what you think. Please send comments to our mailing list, post on the forums (with a tag indicating that this is from 12.04 or precise), or join us in #ubuntu-mythtv. As always, if you encounter any issues with anything in this release, please file a bug using the Ubuntu bug tool (ubuntu-bug PACKAGENAME), which automatically collects logs and other important system information, or if that is not possible, directly open a ticket on Launchpad (bugs.launchpad.net/mythbuntu/12.04/).

Known issues

  • If you are upgrading and have mythstream installed, please remove mythstream before upgrading, as it is no longer supported.
  • If you have used Jamu in the past, you should run "mythmetadatalookup --refresh-all"
  • If you are upgrading and want to use the HTTP Live Streaming you need to create a Streaming storage group

The ISO is available here
on February 14, 2013 10:37 PM

Edubuntu 12.04.2 Release Announcement

Edubuntu

Edubuntu Long-Term Support

Edubuntu 12.04.2 LTS is the second Long Term Support (LTS) point release of Edubuntu, as part of Edubuntu 12.04's 5-year support cycle.

Edubuntu's Second LTS Point Release

The Edubuntu team is proud to announce the release of Edubuntu 12.04.2. This is the second of four LTS point releases for this LTS lifecycle. The point release includes all the bug fixes and improvements that have been applied to Edubuntu 12.04 LTS since it was released. It also includes updated hardware support and installer fixes. If you have an Edubuntu 12.04 LTS system and have applied all the available updates, then your system is already on 12.04.2 LTS and there is no need to re-install. For new installations, installing from the updated media is recommended, since it will be installable on more systems than before and will require far fewer updates than installing from the original 12.04 LTS media.

This release is the first to ship with the backported kernel and X stack. This should be mostly relevant to users of very recent hardware. Current users of Edubuntu 12.04 won't be automatically updated to this backported stack; you can, however, manually install the packages if you want them.

  • Information on where to download the Edubuntu 12.04.2 LTS media is available from the Downloads page.
  • We do not ship free Edubuntu discs at this time; however, there are 3rd party distributors available who ship discs at reasonable prices, listed on the Edubuntu Marketplace

Although Edubuntu 10.04 systems will offer an upgrade to 12.04.2, it's not an officially supported upgrade path. Testing, however, indicated that this usually works if you're ready to make some minor adjustments afterwards.

To ensure that the Edubuntu 12.04 LTS series continues to work on the latest hardware, as well as keeping quality high right out of the box, we will release another two point releases before the next long-term support release is made available in 2014. More information is available on the release schedule page on the Ubuntu wiki.

The release notes are available from the Ubuntu Wiki.

Thanks for your support and interest in Edubuntu!

on February 14, 2013 09:18 PM

13.04 (Raring Ringtail) Alpha 2 Released

The Fridge

Welcome to the Raring Ringtail Alpha 2 release, which will in time become the 13.04 release.

This alpha features images for Kubuntu and Ubuntu Cloud.

At the end of the 12.10 development cycle, it was decided that the Ubuntu flavour would reduce the number of milestone images going forward and concentrate on daily quality and fortnightly testing rounds known as cadence testing. Based on that change, the Ubuntu product itself will not have an Alpha 2 release. Its first milestone release will be the Final Beta release on the 28th of March 2013. Other Ubuntu flavours have the option to release using the usual milestone schedule.

Pre-releases of Raring Ringtail are *not* encouraged for anyone needing a stable system, or anyone who is not comfortable running into occasional, even frequent, breakage. They are, however, recommended for Ubuntu developers and those who want to help in testing, reporting, and fixing bugs as we work towards getting this release ready.

Alpha 2 is the second in a series of milestone images that will be released throughout the Raring development cycle, in addition to our daily development images. The Alpha images are known to be reasonably free of showstopper CD build or installer bugs, while representing a very recent snapshot of Raring. You can download them here:

  • cdimage.ubuntu.com/kubuntu/releases/raring/alpha-2/ (Kubuntu)
  • cloud-images.ubuntu.com/releases/raring/alpha-2/ (Ubuntu Server Cloud)

Alpha 2 includes a number of software updates that are ready for wider testing. This is an early set of images, so you should expect some bugs. For a more detailed description of the changes in the Alpha 2 release and the known bugs (which can save you the effort of reporting a duplicate bug, or help you find proven workarounds), please see:

www.ubuntu.com/testing/

If you’re interested in following the changes as we further develop Raring, we suggest that you subscribe initially to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a week) carrying announcements of approved specifications, policy changes, alpha releases, and other interesting events.

lists.ubuntu.com/mailman/listinfo/ubuntu-devel-announce

Originally posted to the ubuntu-devel-announce mailing list on Thu Feb 14 18:35:05 UTC 2013 by Jonathan Riddell

on February 14, 2013 08:16 PM

Ubuntu 12.04.2 LTS released

The Fridge

The Ubuntu team is pleased to announce the release of Ubuntu 12.04.2 LTS (Long-Term Support) for its Desktop, Server, Cloud, and Core products, as well as other flavours of Ubuntu with long-term support.

To help support a broader range of hardware, the 12.04.2 release adds an updated kernel and X stack for new installations on x86 architectures, and matches the ability of 12.10 to install on systems using UEFI firmware with Secure Boot enabled.

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 12.04 LTS.

Kubuntu 12.04.2 LTS, Edubuntu 12.04.2 LTS, Xubuntu 12.04.2 LTS, Mythbuntu 12.04.2 LTS, and Ubuntu Studio 12.04.2 LTS are also now available. For some of these, more details can be found in their announcements:

  • Kubuntu: www.kubuntu.org/news/12.04.2-release
  • Edubuntu: www.edubuntu.org/news/12.04.2-release
  • Mythbuntu: www.mythbuntu.org/home/news/12042released
  • Ubuntu Studio: ubuntustudio.org/2013/02/ubuntu-studio-12-04-2-lts-precise-pangolin-release-notes/

To get Ubuntu 12.04.2

In order to download Ubuntu 12.04.2, visit:

www.ubuntu.com/download

Users of Ubuntu 10.04 and 11.10 will be offered an automatic upgrade to 12.04.2.
