
The Hiltmon

On walkabout in life and technology

Day One iCloud Sync Fail

I use the Day One journalling program a lot, especially to log commits and all social activity using Slogger. But today I noticed that the Journals on all my devices were out of sync, and new entries created were not passed on to other devices. Somehow, sync was jammed up.

I have Day One on

  • My home server where Slogger runs
  • My laptop, where I work
  • My iPad, where I chill
  • My iPhone, where everything needs to be available

And they are all kept in sync via iCloud. Until 2 days ago. Then it stopped.

On investigation, it seems there were 29 files inside the iCloud folder for Day One that were grayed out, displaying the OS X progress bar next to them. From what I could tell, these files were supposed to be downloading from iCloud but never came in. For 2 days they did not download. A check on my iOS devices showed the same thing: 29 files stuck in downloading status that never came down.

I tried:

  • Rebooting all devices, even the server, no change
  • Deleting the pending files, and they came back, but remained in downloading state
  • Copying them from another system, and they showed up in Day One, but syncing was still busted
  • I started to see sandboxd: ([1674]) Day One(1674) deny forbidden-link-priv errors in console, coincidentally, 29 times, for these files

Short version, iCloud was saying there were 29 updates, and never finished delivering them. And there seems to be nothing you can do to reset or force these to unstick. And I don’t think there is anything the developer of Day One can do about it!

So, I finally gave up. I disabled iCloud syncing on Day One and enabled Dropbox sync instead. The laptop and server switched over OK (once I manually replaced the in-progress files) and creating an entry in one appears in the other.

On iOS, though, things got hairier. The pending updates prevented Day One from accessing its own files, so disabling iCloud sync could never complete. It got to 98% and then just sat there. I had to uninstall the app, turn off iCloud Documents and Data sync in iOS Settings to delete all my iCloud data on the device, then reinstall the app and set it up to use Dropbox instead. Oh, and then re-enable iCloud documents for everything else.

The restores are in progress, and there is the odd Day One crash as it restores, but the missing entries have already synced.

In short, the black-box nature of iCloud sucks when trying to troubleshoot or fix sync errors. The developer of Day One does not pick sides between iCloud and Dropbox sync. I, for one, recommend you sync your Day One data using Dropbox.

No Sale: It Does Not Have What We Can and Will Not Use

As a buyer of software, I always focus on what the software does do. If it does what I need, does it well, and the price is right, I buy it, I use it and I gain the productivity benefits from it.

As a seller of software, I find potential customers focus on what the product does not do, and use this as a reason to avoid purchasing. That’s fine as long as the missing feature is something they can use and actually do (or will) use. But it makes no sense to me when the missing feature is something that they already have and do not use, don’t know how to use or have no reason to use.

As a result, they remain with a painful process or outdated system, one that does not do what mine does do that they need (and why I’m selling to them in the first place).

For example, with regards to my Kifu product, these two conversations have happened many times:

  • Kifu does not have a report writer module. Instead, it has a comprehensive set of reports. Several potential customers have stated that without a report writer, they will not buy Kifu. However, their existing software does have a report writer, which they have never used and cannot figure out, and it did not come with any reports. They had to pay extra to hire the vendor to create reports for them (all of which Kifu already provides). We did not create a report writer for exactly this reason: no one but programmers can use report writers, and most clients need the same reports! But no sale, because no report writer.
  • Kifu also does not have a user-accessible query generator to enable users to create their own database queries. Several potential clients stated that the competitor’s product they are using and wish to replace (which is why they are talking to us) does have this feature, so no sale. But only one out of about twenty actually used the feature or even knew how to find it. The remainder did not purchase because of a feature they themselves admitted they could not use. Go figure.

One could argue that these are both just-in-case type features. But the reality is that these are features for developers, not normal users; and their existence is a sign that the product is not feature complete. A report writer indicates that the vendor does not understand the reporting needs of their clients, a query engine implies the vendor does not understand the information needs of clients. In both cases, though, the vendor gets called in to use these features on behalf of the client because the client cannot. And if they don’t exist, the vendor gets called anyway. So what, really, is the difference?

I was always taught to talk about the benefits of a product, and to be honest about what it does not do. What I don’t understand is the decision to reject a better product because it does not have a feature you cannot and will never use.

Rant over.

Follow me on App.net as @hiltmon or Twitter @hiltmon and share your war stories.

Python Zen

After reading Federico Viticci’s Automating iOS: How Pythonista Changed My Workflow, I think it’s time I added the Python programming language to my repertoire.

The first thing I learned about Python is the Zen of Python by Tim Peters:

  • Beautiful is better than ugly.
  • Explicit is better than implicit.
  • Simple is better than complex.
  • Complex is better than complicated.
  • Flat is better than nested.
  • Sparse is better than dense.
  • Readability counts.
  • Special cases aren’t special enough to break the rules.
  • Although practicality beats purity.
  • Errors should never pass silently.
  • Unless explicitly silenced.
  • In the face of ambiguity, refuse the temptation to guess.
  • There should be one – and preferably only one – obvious way to do it.
  • Although that way may not be obvious at first unless you’re Dutch.
  • Now is better than never.
  • Although never is often better than right now.
  • If the implementation is hard to explain, it’s a bad idea.
  • If the implementation is easy to explain, it may be a good idea.
  • Namespaces are one honking great idea – let’s do more of those!

I think this really applies to all languages and programming in general.

Google Analytics Logger for Slogger

This article explains how to set up Brett Terpstra’s Slogger with a new plugin to journal your daily Google Analytics site stats. This plugin supports:

  • Multiple web properties as long as they are under the same Google login.
  • Multiple dates so you can skip a few Slogger run days and catch up (or back fill).
  • Full days only: it logs only a full day’s worth of stats, so if it runs now, it logs up to yesterday’s stats.
  • Page views, visitors, top 5 sources and top 10 popular pages. If you want different stats, let me know in the comments or via App.net @hiltmon or Twitter @hiltmon.

Warning: This plugin is alpha code, so assume the usual no-warranty legalese; basically, proceed at your own peril.

Note also that this plugin is only two days old as this gets posted, whereas the access token lasts 2 weeks, so the OAuth 2.0 renewal code has not yet been tested.

Finally, I tend to try to make installation instructions as explicit as possible, so please bear with me as there are quite a few steps here.

Installing the Plugin

Follow these steps to install and configure the plugin. If any steps are unclear, check out the detailed instructions below.

Quick Install Instructions

  • Install the google-api-client gem
  • Save Gist 4072068 as plugins/googleanalyticslogger.rb
  • Patch slogger using Gist 4072079
  • Run ./slogger -o Google to create the slogger_config entry
  • Add your web properties UA codes and paste in the client_id and secret
  client_id: "237632137636.apps.googleusercontent.com"
  client_secret: "xUgp_NqKHnyJ_b8cwZbR1tnX"
  • Run ./slogger -o Google again to launch a browser, authenticate and provide an auth_code. Paste that into slogger_config under auth_code.
  • Run ./slogger -o Google a third time to get an access_token and create the first entries

Detailed Installation Instructions

If you are here, I assume you already have Slogger installed and running. As of writing this, I am on version 2.14.2.

Open a terminal and cd to your Slogger folder (in my case that’s ~/Scripts/Slogger). Run all commands from there.

Install the Google API Gem

In terminal, if you use RVM:

gem install google-api-client

If you are running the system ruby, you need to sudo it instead. You can tell if you are running the system ruby by running which ruby and if the answer is /usr/bin/ruby, it’s the system one.

sudo gem install google-api-client

Either way, you should see:

Installing ri documentation for google-api-client-0.5.0...
Installing RDoc documentation for google-api-client-0.5.0...

Note that this is a pre-release gem, but it’s close to final.

Install the Plugin

Download and extract the googleanalyticslogger.rb plugin file from Gist 4072068. Then move the googleanalyticslogger.rb file to your Slogger plugins folder.

Or you can also just create a new googleanalyticslogger.rb in your plugins folder and paste the raw gist code in.

Patch Slogger

Note: This is critical, the plugin will not work without this patch.

Open slogger in your favorite programmer’s editor and go to line 172; you should see:

eval(plugin['class']).new.do_log

Replace it with:

if plugin['updates_config'] == true
  # Pass a reference to config for mutation
  eval(plugin['class']).new.do_log(@config)
else
  # Usual thing (so that we don't break other plugins)
  eval(plugin['class']).new.do_log
end


An explanation for this patch is in the “How it Works” section below.

Save and close slogger.

Optionally: Create your own Google API Client keys

You may skip this step in the process and use the Google API Client codes that I already set up. I’ve not been able to test this on anything but my own account so please let me know if this works.

Just in case, I have included instructions on how to create your own as an appendix to this post.

Create the slogger_config file entry for this plugin

As with all plugins, the first thing you need to do is run Slogger to create the slogger_config entry for it. The -o Google parameter forces Slogger to run only this plugin (and not run all your other plugins and create duplicate Day One entries):

./slogger -o Google

You should see:

Initializing Slogger v2.0 (2.0.14.2)...
> 11:00:45 GoogleAnalyticsLogger: Google Analytics has not been configured or a feed is invalid, please edit your slogger_config file.

Add the Client ID and Secret

Open slogger_config in your favorite text editor and scroll down to the GoogleAnalyticsLogger section.

Paste in my client ID and secret key (or use your own)

  client_id: "237632137636.apps.googleusercontent.com"
  client_secret: "xUgp_NqKHnyJ_b8cwZbR1tnX"


Make sure you save and close slogger_config before moving on to the next step.

Acquire the Authentication Code

This is the painful part of OAuth 2.0: you need to authorize this application to access your data. To do so, just run Slogger again.

./slogger -o Google

Slogger will open your default browser and request authorization to access your data.


Click Allow Access. It will come back with a one-time Authorization Code.


Copy the code and paste it into your slogger_config in the auth_code field.


Save and close the file again.

Acquire the Access Token

Once more, run

./slogger -o Google

And you should see (look for “Getting access token”):

Initializing Slogger v2.0 (2.0.14.2)...
  11:13:34 GoogleAnalyticsLogger: Logging Google Analytics posts
  11:13:34 GoogleAnalyticsLogger: Run for 2012-11-13 - 2012-11-13
  11:13:34 GoogleAnalyticsLogger: Getting access Token...

If you look in your slogger_config, you should now see an access_token and a refresh_token.


If you do not, check that the slogger patch has been saved, set the auth_code to "" and try again.

Add web properties

To add web properties, go to your Google Analytics home page and sign in.

Click on “All Accounts” at the top-left, then expand the first account.


Add the UA codes for each property you want to log to the properties list in slogger_config. These are the same codes you use in your site to send stats to Google Analytics.


Test that it works

One more time, dear friends:

./slogger -o Google

And you should see (note that I ran this on the 14th for two of my properties):

Initializing Slogger v2.0 (2.0.14.2)...
  11:20:09 GoogleAnalyticsLogger: Logging Google Analytics posts
  11:20:09 GoogleAnalyticsLogger: Run for 2012-11-13 - 2012-11-13
  11:20:10 GoogleAnalyticsLogger: - Getting Site Stats for www.hiltmon.com...
  11:20:11               DayOne: =====[ Saving entry to entries/2B435D2409FA4A729A3DC4C4D4734E07 ]
  11:20:11 GoogleAnalyticsLogger: - Getting Site Stats for www.noverse.com...
  11:20:12               DayOne: =====[ Saving entry to entries/6A30ED73AFD9454197818B1E87C28B10 ]

And in DayOne:


To undo the test, just run

./slogger -o Google -u 1

And you’re done

This plugin should now run every time your scheduled Slogger run occurs.

How it works

This plugin uses the Google Analytics API to retrieve stats for web properties using OAuth 2.0 security.

Oauth 2.0: Google Style

The first thing you need to do is create a Google API Client registration at Google (see the Appendix below on how to do this). The most important thing is to tell Google that this is an Installed Application. That way, Google will generate a refresh_token that the application can use to refresh its own access when the regular access_token expires.

Even though it’s an installed application, the first time around Google OAuth 2.0 requires a user sitting in front of a browser, so I set up the plugin to help with this process.

If the client_id is not set, the plugin assumes this is the first run, pops a warning and does nothing.

If the client_id is set, it checks the auth_code. If the auth_code is not set, this must be the second run. The plugin creates an OAuth 2.0 authentication URL and launches the user’s browser. The URL is configured such that the resulting auth_code is visible to the user and can be copied and pasted. At some point, I could possibly write code to monitor the browser and grab the auth_code, but that’s too much for now.

Note that the auth_code is a single-use code; once it has been used, it’s useless. We need to convert it to a longer-term token. So, if the client_id is set and there is an auth_code, the plugin checks the access_token. If that is blank, it gets a new one. Since this is assumed to be the first time, we know that Google also returns the refresh_token. Both are saved to the config file (see mutable config below).

It then checks whether the access_token has expired. If so, it asks for a new one. This code has not yet been tested, and will probably fail. Two days only!
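For the curious, the bootstrap above boils down to a simple decision over the slogger_config fields. This is just an illustrative sketch, not the plugin’s actual code: the key names match the config entries described in this post, except token_expired, which is a stand-in for the real check against the token’s expiry time.

```ruby
# Sketch of the plugin's OAuth 2.0 bootstrap decisions. Config keys mirror
# the slogger_config entries above; 'token_expired' is a stand-in for the
# real expiry check.
def next_oauth_step(cfg)
  return :warn_and_do_nothing if cfg['client_id'].to_s.empty?    # first run
  return :launch_browser      if cfg['auth_code'].to_s.empty?    # second run
  return :fetch_tokens        if cfg['access_token'].to_s.empty? # exchange the one-use code
  return :refresh_token       if cfg['token_expired']            # the untested renewal path
  :use_access_token
end

next_oauth_step({})                                          # => :warn_and_do_nothing
next_oauth_step('client_id' => 'abc')                        # => :launch_browser
next_oauth_step('client_id' => 'abc', 'auth_code' => 'xyz')  # => :fetch_tokens
```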

Mutable Config

The default Slogger plugin gets a Ruby class-level copy of the main Slogger config data structure. The problem is, I needed to be able to save the access_token and refresh_token as and when they change, without user involvement. If you change the copy, it does not change the original, and when Slogger finishes its run and saves the updated config, those changes are lost.

I did look at creating a client_secrets.json file as per the gem documentation, but I feel that having more than one configuration file for Slogger was not a good idea.

So instead, I needed access to the original config data structure, not the class copy. Hence the patch. Now, slogger looks for an updates_config attribute in the registration and, if it is true, passes the config to the plugin directly; otherwise it runs the plugin as usual. This plugin sets 'updates_config' => true in its registration.
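Here is a minimal sketch of that dispatch with a toy plugin. ExamplePlugin and its config keys are made up for illustration; they are not part of Slogger’s real API.

```ruby
# A toy plugin that opts in to config mutation. It receives the live config
# hash and can write tokens back into it before Slogger saves the file.
class ExamplePlugin
  def do_log(config = nil)
    return 'read-only run' if config.nil?
    config['ExamplePlugin']['access_token'] = 'new-token'
    'mutating run'
  end
end

# Sketch of the patched dispatch in slogger itself.
def run_plugin(plugin, config)
  if plugin['updates_config'] == true
    # Pass a reference to config for mutation
    eval(plugin['class']).new.do_log(config)
  else
    # Usual call, so other plugins are unaffected
    eval(plugin['class']).new.do_log
  end
end

config = { 'ExamplePlugin' => { 'access_token' => '' } }
run_plugin({ 'class' => 'ExamplePlugin', 'updates_config' => true }, config)
config['ExamplePlugin']['access_token']  # => "new-token"
```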

I’m uncertain whether this is a good or right way to go, but it works for now in the alpha. Note that any updates to Slogger will trash this patch, which means a two-file approach may be better.

API Discovery

Once the OAuth 2.0 is done, the plugin “discovers” the Google Analytics API. This is needed to access it.

Property Caching

The plugin then uses the Analytics Management API to download and cache a set of all the web properties accessible to this account. If anything went wrong in OAuth 2.0, we’ll find it out here.

Dates

The Google Analytics API does not seem to make timestamps available. It does need a start_date and end_date to get data. If you just use these, though, the API sums all the data between the two dates and returns it as one row. Fortunately, it does have a date dimension that can be used.

Since I want the journal in Day One to have the full set of stats for a date, I set up the plugin to operate up until yesterday, and to do nothing if the last run already covered yesterday. That way, you should never see a journal with partial-day stats. But you can back fill if you want.
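Assuming the last logged date is kept in slogger_config, the window computation might look like this. A sketch under that assumption, not the plugin’s actual code:

```ruby
require 'date'

# "Full days only" window: run from the day after the last logged date
# through yesterday, and do nothing if already caught up.
# (last_logged_date would come from slogger_config in the real plugin.)
def analytics_date_range(last_logged_date, today = Date.today)
  start_date = last_logged_date + 1
  end_date   = today - 1               # never log a partial day
  return nil if start_date > end_date  # already caught up: nothing to log
  [start_date, end_date]
end

analytics_date_range(Date.new(2012, 11, 10), Date.new(2012, 11, 14)).map(&:to_s)
# => ["2012-11-11", "2012-11-13"]
analytics_date_range(Date.new(2012, 11, 13), Date.new(2012, 11, 14))
# => nil
```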

Properties

The plugin then runs for each web site for which you have a Google Analytics UA code. It uses the cached properties list to convert the UA code into an internal Google site code and to get the site name (used in the journal header). If it cannot match the UA code to an entry in the cache, it does nothing.
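The lookup itself amounts to matching UA codes against the cached list. The cache shape below is made up for illustration; the real cache comes from the Analytics Management API:

```ruby
# Sketch: match a configured UA code to a cached property entry.
# Field names (:ua, :id, :name) are illustrative, not the real API's.
def find_property(cache, ua_code)
  cache.find { |p| p[:ua] == ua_code }  # nil if unmatched: the plugin skips it
end

cache = [
  { ua: 'UA-1111-1', id: 'ga:111', name: 'www.hiltmon.com' },
  { ua: 'UA-2222-1', id: 'ga:222', name: 'www.noverse.com' },
]

find_property(cache, 'UA-2222-1')[:name]  # => "www.noverse.com"
find_property(cache, 'UA-9999-9')         # => nil
```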

The Process

For each matched site code, it runs the queries. In the alpha, I have these nice and separate for testing, but they can be batched later on.

Since the date is a dimension field, the Google Analytics API returns a row for each date and each other dimension. For example, for visitors, it returns a row for new visitors and another row for returning visitors for the same date. This means I need to take the returned data and consolidate it by date.

I created a content hash keyed by date, and add an array of markdown-formatted strings to each date, in the order I’d like them to appear in the journal. It’s simple, and it works.

Once all the API queries are done, I loop through the content dates, grab each array of strings, concatenate them into a body and use Slogger to create a new Day One entry for that date as of 11:59PM.
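Rolled together, the consolidation and entry-building steps look roughly like this. The row shape is made up for illustration; the real plugin gets its rows from the Analytics API and hands each body to Slogger’s Day One writer, timestamped at 11:59PM on its date:

```ruby
# Sketch: consolidate per-date, per-dimension API rows into a content hash
# keyed by date, then join each date's markdown lines into one entry body.
def consolidate(rows)
  content = Hash.new { |h, k| h[k] = [] }
  rows.each { |row| content[row[:date]] << "* #{row[:label]}: #{row[:value]}" }
  content
end

def entry_bodies(content)
  # One Day One entry body per date
  content.map { |date, lines| [date, lines.join("\n")] }.to_h
end

rows = [
  { date: '2012-11-13', label: 'New Visitors',       value: 120 },
  { date: '2012-11-13', label: 'Returning Visitors', value: 45  },
]
entry_bodies(consolidate(rows))['2012-11-13']
# => "* New Visitors: 120\n* Returning Visitors: 45"
```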

Feel free to look at the code and let me know what you think. I try to make my early code more explicit to aid debugging, and plan to return later to optimize the code and make it more idiomatic.

Appendix: Optionally create your own Google API Client keys

So just in case, here’s how to create your own Google API client keys:

  • Go to the Google API Console and login with the account that you use for Analytics.
  • Create a New API Application
  • Enable access to the Google Analytics API, and agree to the EULA if necessary.
  • Click on API Access to create your OAuth2 keys
  • Click on Create an OAuth 2.0 Client ID
  • Give it a name and click next
  • Choose Installed Application. This is critical or the keys will not work.
  • Click Create Client ID

You should now see a Client ID and Secret for Installed Applications. Copy and paste these into your slogger_config.

As always, feel free to comment below on this post, or follow me on App.Net as @hiltmon or Twitter as @hiltmon.

Enjoy.

Another Reason to Love This Community

Friendly competition, another reason to love this community.

The gang at App.Net just released their new integrated stream marker feature.

The first response I see (at the top) is from Manton Reece (TW: @manton, ADN: @manton), the maker of Tweet Marker that competes with this feature, congratulating them on their implementation.


Screenshot off App.Net using Wedge.

Linking to Bullshit

While I was working on my Adverse Apple Articles post this morning, Marco Arment was writing this: Linking to bullshit:

If you truly dislike bullshit writing and don’t want to support it, hit the publishers where it hurts: don’t read it, and don’t link to it.

I only partially agree. If the site and the article are truly link bait, then sure, we should ignore them and not send others their way.

But if the article is being read by a lot of people and is intentionally misinforming them, I think we should call them out and correct it. Otherwise the spin and lies will propagate.

Adverse Apple Articles

As part of my research into AAPL vs AMZN Performance Madness, I noticed a lot of negative press on AAPL’s share price. So I decided to wait until after the election, then sample the press again to see whether it was a blip or a trend.

In short, the evidence points to a concerted effort to create a negative impression of Apple and push the stock price down. Let’s take a look, shall we?

Read on →

Multiple Themes in Sublime Text 2

One thing I like to do is to have different themes for different file types in my text editor. That way, at a glance, I can guess what kind of file a text-filled window contains, especially when zoomed out using Mission Control. I’ve been using Custom Language Preferences in BBEdit preferences to set up the color scheme for each file type there.

But how to do this in Sublime Text 2?

Turns out, it’s easy. The file on the left is Ruby, the one on the right is Markdown (The sample code is Slogger by Brett Terpstra).


To achieve this, first install all the themes you may need. Obvious, I know!


Then set the default theme using Preferences / Color Scheme from the Sublime Text 2 menu. This sets the theme in the default preferences file which resides in ~/Library/Application Support/Sublime Text 2/Packages/User/Preferences.sublime-settings.

The theme line looks like this in the file:

{
  "color_scheme": "Packages/User/Railscasts.tmTheme"
}

Next, open a file of the type where you would like to use a different theme, for example, a Markdown file. It will open using the default theme. Now choose Preferences / Settings – More / Syntax Specific – User from the Sublime Text 2 menu. Sublime Text 2 will create a new settings file with the selected file type as its name (in my case, the Markdown settings file is Markdown.sublime-settings). If the file already exists, Sublime will open it for editing.

Set the theme in this file, as well as any other settings you like for that file kind. For example, my Markdown.sublime-settings is:

{
  "color_scheme": "Packages/MarkdownEditing/MarkdownEditor.tmTheme",
  "line_numbers": false,
  "font_face": "Cousine",
  "font_size": 13
}

You may need to restart Sublime Text 2 after this.

Next time you open a file where you have set type-specific user settings, the theme specified will be used. Nice!

See also My Sublime Text 2 Setup. And follow me at @hiltmon on Twitter or @hiltmon on App.Net.

Rarely Microsoft Office

I rarely use Microsoft Office. There, I said it. And it’s true. There are electronic cobwebs on my copy. You may now run out of the room screaming.

For many, this is like saying I rarely bathe. I rarely use Microsoft Office because I have absolutely no reason to use it except for two specific cases.

Email, not Outlook

There are several reasons why I don’t use Outlook for email:

  • Messages are stored in a proprietary format that is not cross platform.
  • I don’t have an Exchange server (and even when I did, I enabled IMAP and did not use Outlook).
  • I believe in using an email client for email, a contact manager for contacts and a specialist calendar application for calendaring.
  • It’s slow, bloated, buggy as hell and not Mac-like at all.

Email to me is a means of communicating; its purpose is to engage people offline using written content. Outlook is a management tool for filing and managing documents, not for communicating.

Writing, not Word

I write a lot. Blog posts, product documentation, proposals, notes, project logs, reports and the occasional letter. All writing. Writing is the activity of converting deep thoughts into readable and understandable sentences. I don’t use Microsoft Word for writing.

For notes, nvAlt; for project logs, VoodooPad; for short form writing, iA Writer or Byword; and for long form writing, Scrivener. These are all tools that support my precious Markdown and enable me to focus on writing.

The results of my writing are shared on the web in HTML form and everywhere else in PDF form. I never, ever send an editable document file to a client, because I have no idea what they will do to it and then send it on as if it was sent by me.

Microsoft Word can be good for formatting documents, but Apple’s Pages is cheaper, faster and way easier to use for this. Having said this, I mostly write in Markdown and use Marked with my own CSS to format and convert the document to PDF. Zero effort formatting.

Calculations, not Excel

If I need to make a quick calculation that I cannot do in my head, I use my trusty HP 12C calculator that’s always within reach. If the calculation is more complex, I use Soulver. In fact, most of the basic calculations, estimates and models I make are done in Soulver. I don’t need a full spreadsheet to do a few calculations with some “what-if” scenarios.

If I need to manipulate data, I use BBEdit for data that arrives in text format, or a database for larger sets. That’s what databases do, using a spreadsheet for a database is wrong. In fact, the majority of Excel files I get are just data tables that would be better off sent as CSV files.

If I do receive an Excel file, I use Numbers to open and view it. Numbers is not a fast spreadsheet product, but it is growing on me. On the rare occasions where I need to create a spreadsheet model for a client, which happens about once a year, I use Numbers, and send the model as a PDF.

Keynote, not PowerPoint

I make presentations, not PowerPoint decks. You’ve all seen them: PowerPoint documents printed out and bound as books. They look awful. If I need to make a booklet deliverable, I write the content in a writing tool and create the booklet in a proper text layout tool such as Adobe InDesign or Pages.

When I need to make a presentation, I create my slides in Keynote. It is so much faster than PowerPoint and the results look better. I also present using Keynote through a projector. If I need to make a handout to go with the presentation, that goes through the booklet process. I do not just print the slides and walk away; that looks terrible. Slides are only meaningful in the context of what I, the presenter, am saying at the time the slide is shown, and useless afterwards. I do not write my presentation in slides and read it out; that’s also wrong.

Special Cases

But there are times when Microsoft Office is needed, and over the past 2 years, these are the only two reasons I have used it:

  • Testing output from programs. I have a few clients for whom I have written programs that generate Excel files. To ensure an Excel file works, I need to open it in Excel. One can never be sure that another spreadsheet product will be sufficiently compatible.
  • Dealing with Lawyers. Lawyers love to send agreements with change tracking on to show us, their clients, what work they have done. You really have no choice but to open these files in Word.

That’s all I found.

More Productive

Just because I almost never use Microsoft Office does not mean I am less productive. Instead, I find myself being way more productive. I manage my email better in Apple’s mail.app with plugins. I write better using Markdown and writing tools. I generate calculation models faster and better in Soulver. I create my presentations faster in Keynote. And I share everything in HTML or PDF.

I simply don’t need Microsoft Office. That’s $200 saved. And I see no reason to purchase the iOS version when it comes out next year.

Managing the Productivity Context

Productivity is doing less stuff to get more stuff done. Managing its context is the key to becoming more productive. Here is a framework to establish and manage your productivity context to maximize your productivity when using your computer, based on what I have been doing to manage my own on my Mac. Hang in there, this is a long one.

Read on →