Aaronontheweb

Hacking .NET and Startups

Real-time Marketing Automation with Distributed Actor Systems and Akka.NET

July 24, 2014 04:42 by Aaronontheweb in .NET, MarkedUp, Akka // Tags: Akka, MarkedUp, In-app Marketing, Open Source // Comments (0)

I published a lengthy post on MarkedUp’s blog yesterday about the new product we recently released, MarkedUp In-app Marketing Automation for Windows Desktop (with support for Windows Store, Windows Phone, iOS, and Android planned) and how it uses Akka.NET’s distributed actor system to execute real-time marketing automation campaigns for our customers.

If you’re interested in learning how to use actors inside your application, or want to know how they can help you build the types of applications that are virtually impossible to build over simple HTTP APIs, then you should read it (click here!)



Tradeoffs in High Performance Software

July 15, 2014 15:56 by Aaronontheweb in .NET, Open Source // Tags: Helios, Akka.NET, Performance // Comments (0)

I’ve spent the past week tracking down an absolutely brutal bug inside Akka.NET: sometimes the CPU utilization of the system would randomly jump from 10% to 100% and stay pegged like that until the process was recycled. No exceptions were thrown, and memory / network / disk usage all remained constant (until I added monitoring.)

I couldn’t reproduce this CPU spike at all locally, nor could I determine the root cause. No one else had reported this issue on the Akka.NET issue list yet, probably because the product is still young and MarkedUp In-app Marketing is the only major commercial deployment of it that I know of (hopefully that will change!)

Ultimately I had to hook up StatsD to MarkedUp’s production Akka.NET deployments to figure out what was going on1 – using that plus a DebugDiag dump, I was able to isolate the problem down to a deadlock.

I didn’t have a meaningful stack trace for it because the deadlock occurred inside native CLR code, so I had to go read the public source for .NET 4.5 to realize what was happening: in an edge case, a garbage-collected Helios IFiber handle could accidentally make a BlockingCollection.CompleteAdding() call on the Fiber’s underlying TaskScheduler (possibly leaked to other Fibers) via the IFiber.Dispose() method, which would gum up the works for any network connection using that Fiber to dispatch inbound or outbound requests.

Took a week to find that bug and a few hours to fix it. What a bitch.

Before I pushed any of those bits to master, however, I decided to run Visual Studio’s performance analysis toolset and test the impact of my changes – a mix of CPU performance and memory consumption testing. This revealed some interesting lessons…

Tradeoff #1: Outsourcing work to the OS vs. doing it yourself

First, the reason we had this performance issue to begin with: Helios is C# socket networking middleware designed for reacting quickly to large volumes of data. Inside a Helios networking client or reactor, developers can specify the level of concurrency used for processing requests for that particular connection or connection group – you can run all of your I/O with maximum concurrency on top of the built-in thread pool, OR you can use a group of dedicated threads for your I/O. It was this latter type of Fiber that caused the problems we observed in production.

Why would you ever want to build and manage your own thread pool versus using the built-in one – i.e. manage the work yourself instead of outsourcing it to the OS?

In this case it’s so you can prevent the Helios IConnection object from saturating the general-purpose thread pool with thousands of parallel requests, possibly starving other parts of your application. Having a handful of dedicated threads working with the socket decreases the system’s overall multi-threading overhead (context-switching et al) and it also improves throughput by decreasing the amount of contention around the real bottleneck of the system: the I/O overhead of buffering messages onto or from the network.
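
To make that concrete, here’s a rough sketch of the dedicated-thread idea – not the actual Helios IFiber implementation, just the general shape of it: a fixed number of long-lived threads drain a shared work queue, so socket I/O never has to compete with the rest of your application for the general-purpose thread pool.

using System;
using System.Collections.Concurrent;
using System.Threading;

// Sketch only: N long-lived background threads drain a shared queue,
// keeping socket work off the shared .NET thread pool.
public class DedicatedThreadQueue : IDisposable
{
    private readonly BlockingCollection<Action> _work = new BlockingCollection<Action>();
    private readonly Thread[] _threads;

    public DedicatedThreadQueue(int threadCount)
    {
        _threads = new Thread[threadCount];
        for (var i = 0; i < threadCount; i++)
        {
            _threads[i] = new Thread(() =>
            {
                // GetConsumingEnumerable blocks until work arrives or CompleteAdding is called
                foreach (var job in _work.GetConsumingEnumerable())
                    job();
            }) { IsBackground = true };
            _threads[i].Start();
        }
    }

    public void Enqueue(Action job)
    {
        _work.Add(job);
    }

    public void Dispose()
    {
        _work.CompleteAdding(); // once called, the worker threads finish up and exit
    }
}

Note the Dispose() method – this is exactly the kind of shared-state cleanup path where an accidental CompleteAdding() call, like the one described above, can silently stop work for every connection sharing the queue.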

In most applications, outsourcing your work to the OS is the right thing to do. OS and built-in methods are performance-optimized, thoroughly tested, and usually more reliable than anything you’ll write yourself.

However, framework and OS designers are not all-seeing and don’t offer developers a built-in tool for every job. In the case of Helios, the .NET CLR’s general-purpose thread-scheduling algorithm works by servicing Tasks in a FIFO queue, and it doesn’t discriminate by request source. If a Helios reactor produces 1000x more parallel requests per second than the rest of your application, then your application is going to be waiting a long time for queued Helios requests (which can include network I/O) to finish. This might compromise the responsiveness of the system and create a bad experience for the developer.

You can try working around this problem by changing the priority of the threads scheduled from your application, but at that point you’ve already broken the rule of outsourcing your work to the OS – you’re now in control of scheduling decisions the moment you do that.

Taking the ball back from the OS isn’t a bad idea in cases where the built-in solution wasn’t designed for your use case, which you determine by looking under the hood and studying the OS implementation.

Another good example of “doing the work yourself” is buffer management, such as the IBufferManager implementation found in Jon Skeet’s MiscUtil library or Netty’s ByteBuffer allocators. The idea in this case: if you have an application that creates lots of byte arrays for stream or buffer I/O, then you might be better off intelligently reusing existing buffers versus constantly allocating new ones. The primary goal of this technique is to reduce pressure on the garbage collector (a CPU-bound problem), which can become an issue particularly if your byte arrays are large.
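
Here’s a naive sketch of the buffer-reuse idea – this isn’t the MiscUtil IBufferManager or Netty API, just an illustration of the general technique:

using System.Collections.Concurrent;

// Naive buffer pool: hand out previously used byte[] instances instead of
// allocating a fresh array for every socket or stream operation.
public class SimpleBufferPool
{
    private readonly ConcurrentBag<byte[]> _free = new ConcurrentBag<byte[]>();
    private readonly int _bufferSize;

    public SimpleBufferPool(int bufferSize)
    {
        _bufferSize = bufferSize;
    }

    public byte[] Rent()
    {
        byte[] buffer;
        return _free.TryTake(out buffer) ? buffer : new byte[_bufferSize];
    }

    public void Return(byte[] buffer)
    {
        if (buffer != null && buffer.Length == _bufferSize)
            _free.Add(buffer); // recycle instead of leaving it for the GC
    }
}

Real implementations add size classes, segmenting, and limits on how many buffers they retain, but the principle is the same: trade a little bookkeeping for a lot less GC churn.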

Tradeoff #2: Asynchronous vs. Synchronous (Async != faster)

I have a love/hate relationship with the async keyword in C#; it is immensely useful in many situations and eliminates gobs of boilerplate callback spam.

However, I also hate it because async leads droves of .NET developers to believe that you can inherently make your application faster by wrapping a block of code inside a Task and slapping an await / async keyword in front of it.2 Asynchronous code and increased parallelism inside your application don’t inherently produce better results, and in fact they can often decrease the throughput of your application.

Parallelism and concurrency are tools that can improve the performance of your applications in specific contexts, not as a general rule of thumb. Have some work that you can perform while waiting for the result of an I/O-bound operation? Make the I/O call an asynchronous operation. Have some CPU-intensive work that you can distribute across multiple physical cores? Use N threads per core (where N is specific to your application.)

Using async / parallelism comes at a price in the form of system overhead, increased complexity (synchronization, inter-thread communication, coordination, interrupts, etc…) and lots of new classes of bugs and exceptions not found in vanilla applications. These tradeoffs are totally worth it in the right contexts.

In an age where parallelism is more readily accessible to developers, it’s also more frequently abused. As a result, one of the optimizations worth considering is making your code fully synchronous in performance-critical sections.

Here’s a simple example that I addressed today inside my fork of NStatsD, which cut the time to send a batch of metrics by roughly 40%.

This code pushes a UDP datagram to a StatsD server via a simple C# socket – the original code looked like this:

foreach (var stat in sampledData.Keys)
{
   var stringToSend = string.Format("{0}{1}:{2}", prefix, stat, sampledData[stat]);
   var sendData =  encoding.GetBytes(stringToSend);
   client.BeginSend(sendData, sendData.Length, callback, null);
}

The code here uses one of the UdpClient’s asynchronous methods for sending a UDP datagram over the wire – it takes this code about 2.7 seconds to transmit 100,000 messages to a StatsD server hosted on a VM on my development machine. Having done a lot of low-level .NET socket work with Helios, I’m suspicious when I see an asynchronous send operation on a socket. So I changed the call from BeginSend to just Send.

foreach (var stat in sampledData.Keys)
{
   var stringToSend = string.Format("{0}{1}:{2}", prefix, stat, sampledData[stat]);
   var sendData = Encoding.ASCII.GetBytes(stringToSend);
   client.Send(sendData, sendData.Length);
}

This code, using a fully synchronous send method, pushed 100,000 messages to the same StatsD server in 1.6 seconds – an improvement of 40%. The reason why? BeginSend operations force the OS to maintain some state, make a callback when the operation is finished, and release those resources after the callback has been made.

The Send operation doesn’t have any of this overhead – sure, it blocks the caller until all of the bytes are buffered onto the network, but there’s no context switching and no async-state objects to cache. You can validate this by looking at the .NET source code for sockets and comparing the SendTo vs. BeginSendTo methods.

The point being: when you have thousands of write requests to a socket in a short period of time the real bottleneck is the socket itself. Concurrency isn’t going to magically make the socket write messages to the network more quickly.

Tradeoff #3: Memory vs. Throughput

I rewrote Helios’ DedicatedThreadFiber to use a BlockingCollection to dispatch pending work to a pool of dedicated worker threads instead of using a custom TaskFactory – a functionally equivalent but mechanically significant change in how this Fiber’s asynchronous processing works. After making a significant change like this, I test Helios under a heavy load using the TimeService example that ships with the Helios source.

Each TimeServiceClient instance can generate up to 3,000 requests per second, and usually I’ll have 5-15 of them pound a single TimeServiceServer instance. On my first test run I noticed that the first TimeServiceClient instance’s memory usage shot up from about 8 MB to 1.1 GB in a span of 45 seconds. “Holy shit, how did I create such an awful memory leak?” I wondered.

Turns out that the issue wasn’t a memory leak at all – the TimeServiceClient could produce requests 2-3 orders of magnitude faster than its Fiber could process them (because, again, it was waiting on a socket), so the BlockingCollection backing the Fiber would grow out of control.

I decided to test something – if I capped the BlockingCollection’s maximum size and had it block the caller until space freed up, what would the impact on memory and performance be?

I initially capped the Fiber at 10,000 queued operations and was surprised by the results – throughput of the system was the same as it was before, but memory usage was only 5 MB as opposed to 1.1 GB. Sure, some of the callers would block while waiting for room to free up in the BlockingCollection, but the system was still operating at effectively the same speed it was before: the maximum speed at which the outbound socket can push messages onto the network.
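
The change itself is tiny – BlockingCollection takes an optional bounded capacity, and once the queue is full, Add() blocks the producer instead of letting the queue grow. (The 10,000 figure is just the cap I chose; treat this as a sketch rather than the exact Helios code.)

// requires System.Collections.Concurrent

// Unbounded: the queue grows as fast as producers can enqueue work.
var unbounded = new BlockingCollection<Action>();

// Bounded: once 10,000 items are pending, Add() blocks the caller until
// a consumer frees up space – back-pressure instead of unbounded memory growth.
var bounded = new BlockingCollection<Action>(boundedCapacity: 10000);

When the queue has room, Add() returns immediately; when it doesn’t, the producer waits – which in this case is exactly what we want, since the socket is the real bottleneck anyway.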

I made a choice to limit memory consumption at the expense of “on-paper” throughput, but it wasn’t much of a tradeoff at all.

Hopefully a theme is starting to emerge here: your system is only as fast as your slowest component, whether it’s the file system, the network, or whatever. Consuming all of your system’s memory in order to queue pending I/O socket requests faster doesn’t improve the speed of that slow component.

You’re better off blocking for a picosecond and conserving memory until that slow resource becomes available.

However, imagine if I had taken a different approach and decided to use more memory to group related socket messages into batches before sending them over the socket. That might result in an improvement in total message throughput at the expense of increased memory consumption – I’d have to write some code to ensure that the receiver on the other end of the socket could break up the batched messages again, but there’s potential upside there.
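
Purely as a hypothetical sketch (Helios doesn’t do this), the batching approach might look something like length-prefixing each message so the receiver can split the combined payload back apart:

// Hypothetical batching sketch – pendingMessages and socket are placeholders.
// Requires System.IO and System.Net.Sockets.
using (var ms = new MemoryStream())
using (var writer = new BinaryWriter(ms))
{
    foreach (var message in pendingMessages)   // IEnumerable<byte[]>
    {
        writer.Write(message.Length);          // 4-byte length prefix
        writer.Write(message);                 // message payload
    }
    socket.Send(ms.ToArray());                 // one send for the whole batch
}

The receiver would read a length prefix, pull that many bytes, and repeat – standard length-prefixed framing.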

Tradeoff #4: Heavyweight Resources vs. Lightweight Resources

Lightweight resources are primitives, structs, objects, and so forth. Heavyweight objects are threads, processes, synchronization mechanisms, files, sockets, etc… anything that is usually allocated with a resource handle or implements the IDisposable interface is a good candidate for a “heavyweight” object.

After I fixed the issue I described in tradeoff #3, we had another memory-related performance problem, this time inside our Executor object, which is used for executing operations inside a Fiber. When we terminate an Executor, we do it gracefully to give it some time to wrap up existing operations before we shut down.

Helios offers two tools for “timed operations” – a lightweight Deadline class and a heavyweight ScheduledValue class. Both of these classes have the same goal: force some value to change after a fixed amount of time.

The Deadline class accomplishes this by comparing its “due time” with DateTime.UtcNow – if DateTime.UtcNow is greater than due time then the Deadline is considered “overdue” and starts evaluating to false.

The ScheduledValue is a generic class backed by a built-in .NET Timer object – you tell it to change its value from A to B after some amount of time, which the Timer does automatically once it elapses.
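
Roughly speaking – these are simplified approximations, not the actual Helios classes – the two designs look like this:

using System;
using System.Threading;

// Lightweight approach: compare against the clock on every read.
// Cheap to create, nothing to dispose, but every poll calls DateTime.UtcNow.
public class Deadline
{
    private readonly DateTime _due;
    public Deadline(TimeSpan timeout) { _due = DateTime.UtcNow + timeout; }
    public bool IsOverdue { get { return DateTime.UtcNow > _due; } }
}

// Heavyweight approach: a Timer flips a flag once, so polling the value
// afterwards is just a field read – no clock calls in the hot path.
public class ScheduledFlag : IDisposable
{
    private volatile bool _value;
    private readonly Timer _timer;

    public ScheduledFlag(TimeSpan flipAfter)
    {
        _timer = new Timer(_ => _value = true, null, flipAfter, Timeout.InfiniteTimeSpan);
    }

    public bool Value { get { return _value; } }
    public void Dispose() { _timer.Dispose(); }
}

The tradeoff is visible right in the sketch: the first class is free to create but pays on every poll; the second pays upfront for a Timer but costs essentially nothing to poll.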

We use Deadlines most frequently inside Helios because they’re simple, immutable, and don’t pose any risk of a resource leak. Deadline is what we were using inside the Executor object’s graceful shutdown mechanism.

Every time a Fiber dequeues an operation for execution, it checks the Executor.AcceptingJobs property (which is determined by the Executor’s internal Deadline) – if this expression evaluates to false, the Fiber stops processing its queue and releases its threads. In other words, this property is evaluated hundreds of thousands of times per second – and as my performance profiling in Visual Studio revealed, it generated a TON of GC pressure via all of its DateTime.UtcNow calls, which allocate a new DateTime structure every time.

I tried, in vain, to rewrite the Deadline to use a Stopwatch instead but wasn’t able to get reliable shutdown times with it. So I decided to use our heavyweight ScheduledValue class in lieu of Deadline – it uses a Timer under the hood but doesn’t allocate any memory or use any synchronization mechanisms when its value is polled.

This resulted in a significant drop in memory usage and garbage collection pressure for high-volume Fibers, and the ScheduledValue cleans up after itself nicely.

Even though the ScheduledValue requires more upfront resources than a Deadline, it was the superior performance tradeoff because it doesn’t have any additional overhead when its value is frequently polled by executing Fibers.

If the scenario was a little different and we had to allocate a large number of less-frequently polled objects, the Deadline would win because it wouldn’t require a large allocation of threads (used by the Timer) and heavyweight resources.

Wrapping Up

High-performance code isn’t black magic – in a general sense it comes down to the following:

  • Correctly identifying your bottlenecks;
  • Understanding the limitations and design considerations of built-in functions;
  • Identifying where work can effectively be broken into smaller parallelizable units; and
  • Making resource tradeoffs around those limitations, bottlenecks, and opportunities for beneficial parallelism.

One thing I have not talked about is the cost of developer time vs. application performance, which is the most important resource of all. I’ll save that for another post, but as a general rule of thumb: “don’t performance-optimize code until it becomes a business requirement.”

1I open sourced our Akka.NET + StatsD integration into a NuGet package – Akka.Monitoring

2I complained about this habit before in “10 Reasons Why You’re Failing to Realize Your Potential as a Developer”



The Profound Weakness of the .NET OSS Ecosystem

July 3, 2014 13:58 by Aaronontheweb in .NET, Open Source // Tags: Akka.NET, Helios, Open Source // Comments (39)

I’m in the process of writing up a lengthy set of blog posts for MarkedUp about the work that went into developing MarkedUp In-app Marketing, our real-time marketing automation and messaging solution for Windows desktop applications (and eventually WP8, WinRT, iOS, Android, Web, etc…)

During the course of bringing this product to market, I personally made the following OSS contributions:

  • Adding full CQL3 collections support to FluentCassandra, a project which Nick Berardi created and I am now maintaining;
  • Inventing Helios, a high-performance TCP/UDP socket server middleware developed in the style of Java’s wildly successful Netty project;
  • Co-creating Akka.NET, a C# port of Scala’s Akka project for distributed actor systems – I was largely responsible for writing the remoting layer, finite state machines, actor system extensions, and lots of other additions here and there;
  • A reliable inter-process file-lock implementation in C#, used by our message delivery client in Windows;
  • And a couple of native C projects we open sourced for doing Visual C++ runtime dependency detection and logging on Win32.

There’s more to come – I just finished a Murmur3 hash implementation this week (not yet OSS) and I’m starting work on a C# implementation of HyperLogLog. Both are essential ingredients to our future analytics projects at MarkedUp.

I really enjoy contributing to OSS, especially on Akka.NET. It’s a successful project thus far, attracting several strong contributors (Roger Alsing, the developer who originally started Akka.NET, is a genius and overall great person) and lots of support from big names in .NET like Don Syme and others.

But here’s the kicker – MarkedUp is a very small company; we’re operating a business that depends heavily on doing distributed computing in .NET; and I have lots of tight deadlines I have to hit in order to make sales happen.

I didn’t make any of these contributions because they made me feel all tingly and warm inside – I don’t have the free time for that any more, sadly. I did it because it was mission-critical to our business. So here’s my question: in the ~15 year history of .NET, no one built a reactive, server-side socket library that’s actively maintained? It’s 2014 for fuck’s sake.

Over the course of working on this product, I was constantly disappointed by a .NET OSS landscape littered with abandoned projects (looking at you, Kayak and Stact) and half-assed CodeProject articles that are more often factually wrong than not.

This week when I started work on HyperLogLog1 I fully expected that I was going to have to implement it myself. But I was incredulous when I couldn’t find a developer-friendly implementation of Murmur3 in C# already. The algorithm's been out for over three years – I could find a dozen decent implementations in Java / Scala and two barely-comprehensible implementations in C#, neither of which had an acceptable license for OSS anyway. I had an easier time porting the algorithm from the canonical C++ Murmur3 implementation than I did following the two C# examples I found online.

So I had to ask myself:

Do .NET developers really solve any hard problems?

We came to the conclusion early on that the right architecture for MarkedUp In-app Marketing depended on a successful implementation of the Actor model. We thought to ourselves “this is a programming model that was invented in the early 70s – surely there must be a decent actor framework in .NET we could leverage for this.”

We weren’t just wrong, we were fucking wrong. We found a graveyard of abandoned projects, some half-assed Microsoft Research projects that were totally unusable, and a bunch of conceptual blog posts. Nothing even remotely close to what we wanted: Akka, but for .NET.

So we spent two weeks evaluating migrating our entire back-end stack to Java – we already depend heavily on Cassandra and Hadoop, so being able to take advantage of first-party drivers for Cassandra in particular really appealed to us. However, we decided that the cost of migrating everything over to the JVM would be too expensive – so we went with a slightly less expensive option: porting Akka to .NET ourselves.

Why in the hell did it fall on one developer working at a startup in LA and one independent developer in Sweden to port one of the major, major cornerstones of distributed computing to .NET? Where the hell was Project Orleans (released by Microsoft AFTER Akka.NET shipped) five years ago?

Did none of the large .NET shops with thousands of developers ever need a real-time distributed system before? Or how about a high-performance TCP socket server? No? What the hell is going on?

.NET is the tool of choice for solving rote, internal-facing, client-oriented problems

There’s a handful of genuinely innovative OSS projects developed by .NET programmers – and they largely deal with problems that are narrow in scope. Parsing JSON, mapping POCOs, dependency injection, et cetera.

Projects like MassTransit – i.e. projects that address distributed computing – are rare in .NET; I can’t even find a driver for Storm in C# (which should be easy, considering that it uses Thrift.)

The point I made four years ago this very day about why .NET adoption lags among startups is still true – .NET is the platform of choice for building line-of-business apps, not building customer-facing products and services. That’s reflected strongly in its OSS ecosystem.

Need a great TCP client for Windows Phone? Looks like there are plenty of those on NuGet. Or a framework for exporting SQL Server Reporting services to an internal website? A XAML framework for coalescing UI events? .NET OSS nails this.

The paramount technical challenge facing the majority of .NET developers today looks like building Web APIs that serve JSON over HTTP, judging from the BUILD 2014 sessions. Distributed computing, consistent hashing, high availability, data visualization, and reactive computing are concepts that are virtually absent from any conversation around .NET.

And this is why we have a .NET OSS ecosystem that isn’t capable of building and maintaining a socket server library, despite .NET being the most popular development platform on Earth for nearly 10 years2.

Compare this to the Java ecosystem: virtually every major .NET project is a port of something originally invented for the JVM. I’m looking at you, NAnt, NUnit, NuGet (Maven), NHibernate, Lucene.NET, Helios, Akka.NET, and so on.

You know what the difference is? There’s a huge population of Java developers who roll hard in the paint and build + open source hard shit. The population of people who do this in .NET is minuscule.

We are capable of so much more than this

The tragedy in all of this is that .NET is capable of so much more than the boring line-of-business apps and CRUD websites Microsoft’s staked its business on.

The shortcomings of .NET’s open source ecosystem are your fault and my fault, not Microsoft’s. Do not point the finger at them. Actually, maybe blame them for Windows Server’s consumer-app-hostile licensing model. That merits some blame.

But the truth is that it’s our own intellectual laziness that created a ghetto of our OSS ecosystem – who’s out there building a lightweight MMO in .NET? Minecraft did it with Java! How about a real-time messaging platform – no, not Jabbr. I mean something that connects millions of users, not dozens.

We don’t solve many hard problems – we build the same CRUD applications over and over again with the same SQL back-end, we don’t build innovative apps for Windows Phone or Windows 8, and there’s only a handful of kick-ass WPF developers out there building products for everyday consumers. Looking at you, Paint.NET – keep up the good work.

Don’t make another shitty, character-free monstrosity of a Windows Phone application – no, it’s not “new” because you can write it using WinJS and TypeScript instead of C#.

Do something mind-blowing instead – use Storm to make a real-time recommendation app for solving the paradox of choice whenever I walk into a 7-11 and try to figure out what to eat.

There are brilliant developers like the Akka.NET contributors, the Mono team, Chris Patterson, and others who work on solving hard problems with .NET. YOU CAN BE ONE OF THEM.

You don’t need Ruby, Node.JS, or even Java to build amazing products. You can do it in .NET. Try a new way of thinking and use the Actor model in Akka.NET or use the EventBroker in Helios. Figure out how to fire up a Linux VM and give Cassandra or Storm or Redis a try. Learn how to work with raw byte streams and binary content. Learn how to work with raw video and audio streams.

The only way to clean up our acts and to make an ecosystem worth keeping is to stop treating our tools like shit shovels and start using them to lay alabaster and marble instead. .NET is more than capable of building scalable, performant, interesting, consumer-facing software – and it falls on us, individual and independent developers, to set the example.


1here’s the only known HyperLogLog implementation in C#, which appears factually correct but out of date and not production-usable

2no, not going to bother citing a source for this. Get over it.



Business to Business Services Are What Will Make Dogecoin Succeed

March 16, 2014 08:45 by Aaronontheweb in Cryptocurrency // Tags: cryptocurrency, dogecoin // Comments (1)

Following on from my previous post about second- and third-generation cryptocurrencies advancing the state of the art, I’ve spent a lot of time participating in /r/dogecoin on Reddit and seeing dozens of new businesses start accepting Dogecoin every day.

I’ve mined a fair bit of Dogecoin so far (albeit on laptops, not dedicated mining rigs) and purchased a substantial amount, so I’ve been looking for some ways to spend or invest it.

Here’s some of the purchasing that I’ve done or am considering at the moment:

  • Purchased two music albums sold by individual artists directly to end users, for about 3000 doge combined;
  • Sponsored an ACM hackathon at University of Virginia for about 1000 doge – sentimental for me since I was an ACM chapter president when I was in college;
  • Considered buying some awesome subscription coffee from Black Market Beans with Dogecoin, but I unfortunately don’t have the necessary brewer for it; 
  • Started purchasing some Keurig K-cups from a distributor for Dogecoin through /r/Dogemarket; and
  • Going to purchase some Steam games from DogeKeys as soon as I get some more free time.

There’s a steady clip of new merchants and stores announcing support for Dogecoin on Reddit and elsewhere every day – to the tune of a couple dozen per day based on my casual observation of the forums.

One thing that 100% of these stores have in common is that the goods they’re pricing in Dogecoin are consumer goods, sold directly to individuals. And that’s probably the natural place for a new type of payment technology to begin getting traction, since the barriers to entry are lowest.

However, there are millions of consumer businesses that simply won’t take the currency risk of accepting Dogecoin (or Bitcoin) directly – they might try using a service like Coinbase or BitPay, which mitigate the currency risk by pricing everything indexed to some fixed fiat value (USD, EUR, etc) and sending the fiat back to sellers immediately once the transaction clears.

But the core problem that remains for early crypto-accepting businesses like Overstock, even with Coinbase et al today, is that they can’t cover any of their business costs with Dogecoin or any other cryptocurrency. If I’m Overstock’s CEO, I might be able to accept cryptocurrency from customers but I can’t use it to:

  • Pay the salaries of my employees;
  • Pay for critical infrastructure costs like large-scale hosting, CDNs, etc;
  • Pay for the COGS of the inventory sold to customers, including shipping and so forth;
  • Pay for marketing and advertising needed to drive new users to the site and grow the business;
  • Pay taxes and licensing fees;
  • Pay for service providers like accountants and attorneys;
  • Pay for the rainbow of various insurance premiums that every business has to have; and
  • Pay for real-estate and other capital expenditures.

All of these expenses are really business-to-business transactions, if you consider employees as sole proprietor service providers for the sake of this example.


Not being able to pay for any of those things without having to convert Dogecoin back into USD and being subject to rapidly changing market conditions is the real-world definition of cryptocurrency risk. And so the strategy of choice for mitigating it is to simply never pay for any of these services in Dogecoin – set the price for all of your goods in USD and let the customers take the currency risk at the point of sale.

However, that strategy ultimately limits the utility and value of the cryptocurrency to business-to-consumer transactions only – still really useful, but also makes the USD / Dogecoin exchange rate the sole determinant of the currency’s value, since prices for goods are still really set in USD.


An alternative and much more interesting approach is to start selling business-to-business services in Dogecoin directly. It’s a process that will take much more time to develop (decades) but it ultimately leads to a path where the value of Dogecoin goods can be wholly priced in cryptocurrency. And therefore: leads to a day when the value of Dogecoin is based off of its buying power indexed to real goods and services, not speculation.

I run a B2B startup at the moment - MarkedUp Analytics; if I could pay my (considerable) Amazon Web Services bill in Dogecoin alone and there were tools available to make it easy for my accounting firm to reconcile Dogecoin-based transactions in Quickbooks, I’d be in.

Growing a B2B Dogecoin ecosystem from the ground up can be bootstrapped. It starts with a few SaaS products that make it easy t
