
Panic Blog

From the desk of
Cabel
Engineering Dept.

The Lightning Digital AV Adapter Surprise

March 1st, 2013

We’ve been doing significant testing lately with video out using various iOS devices for an upcoming project. In doing so, we waded right into the middle of a strange video out mystery. It’s time to unravel that mystery. (Chung-chung!)

Mystery #1: 1600 × 900 Resolution, Tops

When we turn on “Video Mirroring” to send out an image through the Lightning AV Adapter, the system tells us that the maximum and optimum resolution we can do is 1600 × 900:


“Hang on, that’s not 1080p!”, you’re saying to yourself. That’s exactly what we said!

When we plug in the old Dock Connector AV Adapter, the system gives us the full 1920 × 1080:


So that’s a bummer. Curiously, Apple’s iPad mini tech specs claim “up to 1080p” video out support, but we can’t figure out how that’s possible. Maybe they mean that the adapter upscales the 1600 × 900 image to 1080p?

Mystery #2: MPEG Artifacts

When you plug a device into a television, you expect a clean, crisp signal — a mirror of what you see on the screen. Right?

But not with the Lightning Digital AV Adapter:


Not exactly the cleanest text in the universe! Look at all that edge garbage. (We don’t get these artifacts with the old AV adapter.)

Theory

We thought we were going insane. This is just an AV adapter! Why are these things happening? Limited resolution. Lag. MPEG artifacts. Hang on — these are the same things we experience when we stream video from an iOS device to an Apple TV…

You got it. After some good Twitter leads, and a little digging, we had our theory:

Is the Lightning Digital AV Adapter basically a small AirPlay-like receiver?

I don’t mean AirPlay the network protocol, but rather AirPlay the video compression system. The adapter must somehow set up a connection with the very iOS device it’s plugged into. It’s in no way passing raw HDMI out from the device, but rather receiving a compressed stream and presenting it upscaled to 1080p.

“But wait”, you might be saying. “You mean to tell me there’s enough electronics in that tiny plug to support AirPlay streaming and decoding?”

It seems unlikely, doesn’t it? So out came the hacksaw.


You would not believe how incredibly tiny those components are on the left. Smaller than anything we’ve seen, electronics-wise. What could all of those resistors be for?

Let’s flip it over:


Your eyes don’t deceive you — that tiny chip says ARM. And the H9TKNNN2GD part number on there points towards RAM — 2Gb worth.

In short: it appears the Lightning Digital AV Adapter has a SoC CPU. 

So, AirPlay (or AirPlay-like MPEG streaming) makes a lot more sense now.

Conclusion

There’s a lot more going on in this adapter than we expected: indeed, we think the Lightning Digital AV Adapter outputs video by using AirPlay (or similar MPEG streaming). Are we off base? Let us know!

There are a lot of questions. What OS does it boot? @jmreid thinks the adapter copies over a “mini iOS” (!) from the device and boots it in a few seconds every time it’s connected, which would explain the fairly lengthy startup time for video out. Why do this crazy thing at all? All we can figure is that the small number of Lightning pins prevented them from doing raw HDMI, period, and the elegance of the adapter trumped the need for traditional video out, so someone had to think seriously outside the box. Or maybe they want to get as much functionality out of the iPad as possible to reduce cost and complexity.

The bad news? By streaming internally, the quality is poor, and it’s not 1080p. We long for raw, untouched HDMI-out.

The good news? If someone complains that this insignificant plug costs $50, tell them it’s a tiny computer!

UPDATE 3/2: This anonymous comment — if you believe it — confirms nearly all of our theories and adds much-needed backstory. Very interesting! Thanks, whoever you are. Our nerd-brains appreciate it.

PS: If you’re wondering why we’re obsessed with clean iOS video out, we’ll post some status on that soon!
Posted at 3:57 pm 136 Comments

From the desk of Cabel
Portland, Oregon 97205

Coda and Sandboxing

December 12th, 2012

Before we can add new features to Coda 2 in the Mac App Store, we must first “Sandbox” it — adhere to a set of Apple guidelines aimed at increasing the security of Mac OS X.

What does this mean, really?

Well, for safety, sandboxing limits an app’s access to your local files until you give the app explicit permission to interact with those files. And once you’ve done this, your permission is remembered in the future. In other words, Coda won’t be able to see most of your local folders until you specifically select them in a traditional “Choose” dialog. The good news? Coda has Sites, and Sites have a Local Path, and once you “Choose” the Local Path when setting up your site, you’ll be able to view that folder and interact with it in the future. The bad news? You’ve got to reset all of your Local Paths, and if you don’t use Sites in Coda (which would be a bit weird), there will be a few brief bumps.
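For the technically curious: the mechanism sandboxed Mac apps generally use to remember that permission is the security-scoped bookmark. Here’s a minimal sketch of the idea — not necessarily how Coda itself does it — where folderURL stands in for the URL you get back from a “Choose” panel:

    // Hypothetical sketch: persisting access to a user-chosen folder
    // (say, a site's Local Path picked in a "Choose" panel).
    NSError *error = nil;
    NSData *bookmark = [folderURL bookmarkDataWithOptions:NSURLBookmarkCreationWithSecurityScope
                           includingResourceValuesForKeys:nil
                                            relativeToURL:nil
                                                    error:&error];
    [[NSUserDefaults standardUserDefaults] setObject:bookmark forKey:@"LocalRootBookmark"];

    // Later — even after a relaunch — resolve the bookmark to regain access.
    BOOL isStale = NO;
    NSURL *resolvedURL = [NSURL URLByResolvingBookmarkData:bookmark
                                                   options:NSURLBookmarkResolutionWithSecurityScope
                                             relativeToURL:nil
                                       bookmarkDataIsStale:&isStale
                                                     error:&error];
    [resolvedURL startAccessingSecurityScopedResource];
    // ...read and write files inside the folder as usual...
    [resolvedURL stopAccessingSecurityScopedResource];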

These changes should only affect the Mac App Store version. And we think most users won’t even notice that anything has changed.

Here’s the full list of what will change, slated for a future Coda release:

1 Local Root

Your site’s “Local Root” will have to be reset. You’ll be prompted to do this the first time you try to connect.

You only have to do this once for each of your sites!

2 Go To Folder

It will no longer be possible to “Go To” any local path by typing it in. “Go To Folder” on a Local path will now bring down a traditional “Choose” panel.

3 Path History

In the Sidebar and the Files browser, the “Path” pop-up can no longer show anything above your defined Local Root. To go above your Local Root, you’ll have to use Choose.

If you’re not working in a Site, you will land in a generic sandboxed home directory, and must Choose another folder to continue.

You only need to “Choose” a folder once!

4 Path Bar Browsers

If you click on a folder outside of your Local Root, you’ll have to manually choose the folder via the Choose panel.

You only need to “Choose” a folder once!

5 Saving Files

It’s no longer possible to Save files you don’t have write access to, and Coda is no longer able to offer an authorization dialog to permit this behavior.

This includes any files you don’t own and don’t have proper permissions to write, such as files owned by a “web” process.

This is also an App Store restriction.

6 Get Info

It’s no longer possible to change permissions of files that require Administrator/Root access from Coda’s Get Info window.

You’ll have to switch to the Finder and adjust permissions there before editing these items.

This is also an App Store restriction.

7 Places

Any Local places will be cleared during the upgrade, and will need to be recreated once.

Note: Places are defined per computer, so they will need to be reset on each computer Coda is used on.

8 SVN and GIT

Tool paths may need to be reset depending on their location on your computer.

9 Local Shell

Coda will no longer be able to open a direct local shell/terminal. (You could always turn on Remote Login in Sharing preferences, and connect through that.)

That’s it. What do you think?

For the truly curious, we’ve put together a special Coda 2 build with these changes.

Experimental

If you wish to try the Coda Sandboxing Test build, it’s critical you understand that this build is experimental and beta-quality. You must back up your system first.

Also, you must be currently using Coda 2.0.6 or higher. And if you’re using the Mac App Store + iCloud version of Coda 2, you must first turn off iCloud Sync in your current Coda, before launching this build.

Got that? Download the build here. (50 MB .zip)

We don’t have a timeline on this release, but we’re curious to know your general thoughts on Coda 2 and Sandboxing. Once again, we do not think these changes will affect most people, but we’d love it if you could please take this survey:

Thanks for reading, and thanks for using Coda 2. We’re excited to finish sandboxing and start work on more new, awesome things!

Posted at 1:11 pm 8 Comments

From the desk of
Wade
Engineering Dept.

iTunes 11 and Colors

December 11th, 2012

iTunes 11 is a radical departure from previous versions and nothing illustrates this more than the new album display mode. The headlining feature of this display is the new view style that visually matches the track listing to the album’s cover art. The result is an attractive display of textual information that seamlessly integrates with the album’s artwork.

After using iTunes for a day I wondered just how hard it would be to mimic this functionality — use a source image to create a themed image/text display.

The first step in replicating iTunes’ theming is obvious: getting the background color used for the track listing. This seemed easy enough — just use simple color frequency to determine the most prevalent color along the left-hand side of the artwork. Doing a simple color count gives pretty good results, but looking at iTunes it was clear there was more to it than that. I proceeded to add a bit of logic to prefer colored backgrounds instead of just using black or white when those were the most prevalent colors. Doing this produces more interesting styles, since seeing only black and white backgrounds would be a bit boring. Of course, you don’t want to replace black or white if those colors really are dominant, so I made sure a colored background was only chosen if it was at least 30% as common as the dominant black or white.
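If you’re curious what that looks like in code, here’s a rough sketch of the background pass. The method name and exact thresholds are illustrative — they aren’t lifted verbatim from the demo project linked below:

    // Count the colors along the artwork's left edge and pick a background.
    // Illustrative sketch; thresholds are approximate.
    - (NSColor *)findBackgroundColorInImage:(NSBitmapImageRep *)imageRep
    {
        NSCountedSet *edgeColors = [[NSCountedSet alloc] init];
        for (NSInteger y = 0; y < [imageRep pixelsHigh]; y++) {
            NSColor *pixel = [[imageRep colorAtX:0 y:y] colorUsingColorSpaceName:NSCalibratedRGBColorSpace];
            if (pixel)
                [edgeColors addObject:pixel];   // left-hand column only
        }

        NSColor *mostCommon = nil;    // overall winner, possibly black or white
        NSColor *bestColored = nil;   // most common color that isn't black/white
        NSUInteger mostCount = 0, coloredCount = 0;

        for (NSColor *color in edgeColors) {
            NSUInteger count = [edgeColors countForObject:color];
            if (count > mostCount) { mostCount = count; mostCommon = color; }

            CGFloat r = [color redComponent], g = [color greenComponent], b = [color blueComponent];
            BOOL blackOrWhite = (r > 0.91 && g > 0.91 && b > 0.91) || (r < 0.09 && g < 0.09 && b < 0.09);
            if (!blackOrWhite && count > coloredCount) { coloredCount = count; bestColored = color; }
        }

        // Prefer a colored background, but only if it shows up at least
        // 30% as often as the dominant black/white color.
        if (bestColored && coloredCount >= 0.3 * mostCount)
            return bestColored;
        return mostCommon;
    }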

Once I started filtering black and white backgrounds, my results started to get a bit closer to iTunes. After doing some more analysis, I saw that iTunes also looks for borders around the artwork. So let’s say you have a solid white border around the artwork picture — iTunes will remove the border and base its theming colors off the remaining interior content. I didn’t add this functionality, as it was outside the scope of my simple demo application.

After the background color was determined, the next step was to find contrasting text colors. Again, the first thing I tried was simple color counting. This provides surprisingly good results, but iTunes does better: if we relied only on color frequency, you’d get variants of the same color for the different types of text (e.g. primary, secondary, detail). So the next thing I did to improve the results was to make sure the text colors were distinct enough from each other to be considered separate colors.

At this point things were really starting to look good. But what other aspects would need to be considered to ensure the text always looked good on the chosen background color? To ensure colorful text, I also added a bit of code to make sure the color used for the text had a minimum saturation level. This prevents washed-out or very light pastel colors from being used, which might not give the best appearance.

Now that the text had unique colors that looked good with the background, the only remaining problem was that the resulting text colors could end up lacking enough contrast with the background to be readable. So the last thing I added was a check to make sure any text color would provide enough contrast with the background. Unfortunately, this requirement does cause a rare “miss” when finding text colors, which then causes the default black/white colors to be used.
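Boiled down to code, those three text-color checks end up looking something like this — the helper name and thresholds are mine, for illustration, not the exact logic in the demo project:

    // Is a candidate text color usable against the chosen background?
    // Sketch only — the demo project's real checks differ in the details.
    - (BOOL)textColor:(NSColor *)candidate worksOnBackground:(NSColor *)background
           existingColors:(NSArray *)existingColors   // colors already picked, in calibrated RGB
    {
        NSColor *c  = [candidate colorUsingColorSpaceName:NSCalibratedRGBColorSpace];
        NSColor *bg = [background colorUsingColorSpaceName:NSCalibratedRGBColorSpace];

        // 1. Minimum saturation, so washed-out pastels are skipped.
        if ([c saturationComponent] < 0.15)
            return NO;

        // 2. Distinct from colors already chosen for other text roles.
        for (NSColor *existing in existingColors) {
            if (fabs([c hueComponent] - [existing hueComponent]) < 0.05 &&
                fabs([c brightnessComponent] - [existing brightnessComponent]) < 0.25)
                return NO;
        }

        // 3. Enough luminance contrast with the background to stay readable.
        CGFloat textLum = 0.299 * [c redComponent]  + 0.587 * [c greenComponent]  + 0.114 * [c blueComponent];
        CGFloat bgLum   = 0.299 * [bg redComponent] + 0.587 * [bg greenComponent] + 0.114 * [bg blueComponent];
        CGFloat contrast = (MAX(textLum, bgLum) + 0.05) / (MIN(textLum, bgLum) + 0.05);
        return contrast > 1.6;
    }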

The end result looks something like this:


It’s not 100% identical to iTunes — sometimes it’s better! Sometimes just different — but it works pretty well overall.

You can see exactly what I did in the following Xcode demo project:


ColorArt.zip

40 KB. Mac OS X 10.7+

A few notes about this demo. I did very basic frequency filtering to prevent random colors from appearing as text colors; in my case I chose to ignore colors that only appear once. This threshold should be based on your input image size, since smaller images won’t have as many pixels to sample from. Another processing step that iTunes performs — and that I would also do if this were shipping code — is looking for compression fringing around the edges of the image. I’ve noticed a few cover art images that contain a single-pixel edge of white/gray fringe that should be ignored and removed before sampling for the colors.
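As a concrete (and hypothetical) version of that threshold, you could scale the cutoff with the number of sampled pixels rather than hard-coding “appears only once”:

    // Hypothetical size-relative cutoff: ignore colors covering less than
    // roughly 0.1% of the sampled pixels, instead of a fixed "appears once".
    static BOOL ColorIsTooRare(NSCountedSet *colorCounts, NSColor *color, NSUInteger sampledPixelCount)
    {
        NSUInteger minimumCount = MAX((NSUInteger)1, sampledPixelCount / 1000);
        return [colorCounts countForObject:color] <= minimumCount;
    }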

(Last but not least, this code was written in a few hours, and is very rough. So just in case you have thoughts about speed or optimizations, please note it was more of a thought exercise than a lesson in algorithm design. Engineer disclaimer complete.)

That being said, I hope this is somewhat interesting! It shows that with just a bit of work, you too can have fancy themed designs.

UPDATE: Thanks to Aaron Brethorst, this code is also now on GitHub.

Posted at 10:55 am 60 Comments

From the desk of
Cabel

Coda 2.0.7 Beta 1

December 4th, 2012

It’s minor, but we thought our deepest Coda fans could give Coda 2.0.7 a whirl.

If you’re interested, grab Coda 2.0.7b1 here (51MB).

UPDATE 12/10: The beta has ended. The app has been released for direct customers and submitted to Apple.

Notable changes: improved stability and syntax highlighting performance.

If you find issues, please report them via Hive!

PS: We also recently solicited, via Twitter, testers for Transmit w/iCloud and Dropbox Favorites Sync (coming soon!), and a new Panic iPad app that’s all about Status. You should follow us!

Posted at 3:17 pm 5 Comments

From the desk of
logan
Engineering Dept.

Fun with Face Detection

November 21st, 2012

Let’s face it (sorry): face detection is cool. It was a big deal when iPhoto added Faces support — the ability to automatically tag your photos with the names of your friends and family adds a personal touch. And Photo Booth and iChat gained some awesome new effects in OS X Lion that can automatically track faces in the frame to add spinning birds and lovestruck hearts and so on. While not always practically useful, face detection is a fun technique.

I’ve seen attempts at duplicating Apple’s face detection technology. (Apple is far from the first company to do it.) There are libraries on GitHub and various blog posts for doing so. But recently I realized that Apple added support for face detection in OS X Lion and iOS 5 — it somehow slipped under my radar of new shiny things. Developers now have a direct link to this powerful technology on both platforms, right out of the proverbial box.

Using Face Detection through Core Image

Apple’s face detection is exposed through Core Image, the super-useful image manipulation library. Two classes are important: CIDetector and CIFeature (along with its subclass, CIFaceFeature). With a little experimenting one night, I was able to get a sample app detecting faces within a static image in about 10 lines of code:

    // Create the image
    CIImage *image = [CIImage imageWithContentsOfURL:[NSURL fileURLWithPath:@"Photo.jpg"]];

    // Create the face detector
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:CIDetectorAccuracyHigh, CIDetectorAccuracy, nil];

    CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];

    // Detect the faces
    NSArray *faces = [faceDetector featuresInImage:image];

    NSLog(@"%@", faces);

Note the dictionary of options. There is only one particularly useful key: CIDetectorAccuracy. It has two possible values: CIDetectorAccuracyLow and CIDetectorAccuracyHigh. The only difference: with high accuracy, additional processing seems to be performed on the image in order to detect faces, at the cost of higher CPU usage and slower performance.

In cases where you are only applying detection to a single static image, high accuracy is best. Low accuracy becomes handy when manipulating many images at once, or applying the detector to a live video stream. You see about a 2–4x improvement in render time with low accuracy, but face tracking might pick up a couple of false positives in the background once in a while, or be unable to detect a face angled away from the camera as well as high accuracy could.

Now that we have an array of faces, we can find out some information about each face within the image. CIFaceFeature exposes several useful properties to determine the bounding rectangle of the face, as well as the position of each eye and the mouth.

Using these metrics, it’s then possible to draw on top of the image to mark each facial feature. What you get is a futuristic sci-fi face tracker à la The Fifth Element. Leeloo Dallas Multipass, anyone?

    // Create an NSImage representation of the image
    NSImage *drawImage = [[NSImage alloc] initWithSize:NSMakeSize([image extent].size.width, [image extent].size.height)];
    [drawImage addRepresentation:[NSCIImageRep imageRepWithCIImage:image]];

    [drawImage lockFocus];

    // Iterate the detected faces
    for (CIFaceFeature *face in faces) {
        // Get the bounding rectangle of the face
        CGRect bounds = face.bounds;

        [[NSColor colorWithCalibratedWhite:1.0 alpha:1.0] set];
        [NSBezierPath strokeRect:NSRectFromCGRect(bounds)];

        // Get the position of facial features
        if (face.hasLeftEyePosition) {
            CGPoint leftEyePosition = face.leftEyePosition;

            [[NSColor colorWithCalibratedWhite:1.0 alpha:1.0] set];
            [NSBezierPath strokeRect:NSMakeRect(leftEyePosition.x - 10.0, leftEyePosition.y - 10.0, 20.0, 20.0)];
        }

        if (face.hasRightEyePosition) {
            CGPoint rightEyePosition = face.rightEyePosition;

            [[NSColor colorWithCalibratedWhite:1.0 alpha:1.0] set];
            [NSBezierPath strokeRect:NSMakeRect(rightEyePosition.x - 10.0, rightEyePosition.y - 10.0, 20.0, 20.0)];
        }

        if (face.hasMouthPosition) {
            CGPoint mouthPosition = face.mouthPosition;

            [[NSColor colorWithCalibratedWhite:1.0 alpha:1.0] set];
            [NSBezierPath strokeRect:NSMakeRect(mouthPosition.x - 10.0, mouthPosition.y - 10.0, 20.0, 20.0)];
        }
    }

    [drawImage unlockFocus];

With a little more work, it’s pretty easy to apply this technique to live video from the device’s camera using AVFoundation. As you get back frames from AVFoundation, you perform face detection and modify the frame before it is displayed. But I’ll leave that as an activity for the reader.
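If you’d like a head start on that exercise, the rough shape — using the low-accuracy detector for speed, with error handling and drawing omitted — is something like this. It’s a sketch, not the sample app’s code, and the FaceTracker class name is made up:

    // Requires AVFoundation.framework and QuartzCore.framework.
    @interface FaceTracker : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
    @property (strong) AVCaptureSession *session;
    @property (strong) CIDetector *detector;
    @end

    @implementation FaceTracker

    - (void)start
    {
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:CIDetectorAccuracyLow, CIDetectorAccuracy, nil];
        self.detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];

        self.session = [[AVCaptureSession alloc] init];
        AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:NULL];
        [self.session addInput:input];

        AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
        [output setSampleBufferDelegate:self queue:dispatch_queue_create("com.example.facetracker", NULL)];
        [self.session addOutput:output];

        [self.session startRunning];
    }

    // Called once per captured frame.
    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage *frame = [CIImage imageWithCVImageBuffer:pixelBuffer];
        NSArray *faces = [self.detector featuresInImage:frame];
        // ...draw a rectangle over the preview layer for each face.bounds...
    }

    @end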

And amazingly, it even works with cats.


With a little more effort, I was able to grab the closest detected face’s region of the image, and do a simple copy-and-paste onto the other detected faces (adjusting for angle and distance, of course). Behold… Panic’s newest, most terrifying cloning technology!


Here’s a little sample app. Have fun!

FaceTest.zip (64 KB)

Xcode 4.2+, 10.8 SDK, ARC

Posted at 11:25 am 11 Comments

From the desk of Cabel
Portland, Oregon 97205

App Scams

November 20th, 2012

Like Minecraft? Then surely you’ll love Mooncraft!


Except, well, you really won’t. Really.

What happened here? It’s pretty simple.

1. Scammer makes an extremely simple iOS app and submits it to Apple.

2. Once it’s approved, they change the screenshots, description, and name — things you can edit at any time.  Piggyback off a popular game!

3. Buy hundreds of fake ★★★★★ reviews, somehow.

4. Sit back and relax as you slowly and gently travel towards hell.

This isn’t Apple’s fault, of course — it’s bait-and-switch, the classic inch/mile situation that scammers rely on. How can Apple fix this? Being able to adjust screenshots/descriptions after submitting is important, and we don’t want that to go away. And it’d be unreasonable for Apple to manually review all screenshot changes.

How about this: after an app hits the store, if it has nothing but 1-star reviews (that include text!), and those reviews mention keywords like “scam” a lot, flag it for further inspection?


I bet there’s an algorithm out there that could find these apps pretty quickly.
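Purely for fun, a toy version of that check might look like this — completely hypothetical, of course; there’s no such App Store API, and “reviews” here is just an array of dictionaries:

    // Toy heuristic: "nothing but 1-star reviews with text, mentioning scam keywords a lot."
    static BOOL AppLooksSuspicious(NSArray *reviews)
    {
        if ([reviews count] < 20)
            return NO;   // not enough reviews to judge

        NSUInteger oneStarWithText = 0, scamMentions = 0;
        for (NSDictionary *review in reviews) {
            NSInteger stars = [[review objectForKey:@"stars"] integerValue];
            NSString *text  = [[review objectForKey:@"text"] lowercaseString];
            if ([text length] == 0)
                continue;
            if (stars == 1)
                oneStarWithText++;
            if ([text rangeOfString:@"scam"].location != NSNotFound)
                scamMentions++;
        }

        return oneStarWithText > 0.9 * [reviews count] && scamMentions > 0.25 * [reviews count];
    }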

Either way, Quang Nguyen (which might be a fake name, of course): you’re a terrible person. (Thanks to Steve for missing the tiny popup button and clicking “Buy App” by accident.)

UPDATE 12/10/2012: For a while, Mooncraft was pulled from the store. But, of course, it’s back.

UPDATE 1/10/2013: Apple has announced a new policy that screenshots can only be updated when they accompany a new application binary submitted for review. Hopefully that will put a stop to this particular type of trickery.

Posted at 11:18 am 26 Comments

From the desk of Cabel
Portland, Oregon 97205
