Airtable is a relatively new Software-as-a-Service (SaaS) product to enter the arena for business users. I first saw it when I was looking for process management tools and a search result compared it to Trello. I was intrigued, because I love Trello for what it does for me personally and what it has done for our team. This post is the first in a series that will focus on Airtable and how to use it in a modern Enterprise. The Modern Enterprise is an organization or team that uses the Internet and online business software to organize people, processes, information, and systems to achieve its goals.

We chose to start with Airtable because it addresses a basic and fundamental need in business which Excel, Google Spreadsheets, and others have thus far met fairly well. If you are no novice to organizing information, you know what I'm talking about. I once knew a professor who said his $500 million per year professional services business was run by his COO on Excel.
Business users have been using spreadsheets to organize information for decades. Excel is probably one of the most used business tools in the world because of its versatility, by way of simple columns and rows, and its power, by way of formulas and macros. Today there are many online alternatives to Excel. Here are a few which I've used and think are good for the most part.
Google Spreadsheets – We use these extensively.
Smartsheet – I have used it in the past and think it does some things well.
Microsoft Excel 365 – In addition to _being_ Excel and syncing with its iOS apps, it can also work with Microsoft's big data product, HDInsight.
These tools are all great and I don't want to take away from them, but Airtable is another beast altogether. What drew me to Airtable was its simplicity. It looks simple. It feels simple. It _is_ simple. Having had, ahem, some, ahem, experience in databases and online software, I knew how to start using it pretty quickly. I knew I could use it, but I wanted to see if someone else could. I asked one of our Project Managers, Danielle, to make an Airtable base to track the status of our clients: which ones were on subscription with us and which were working with us on an ad hoc basis. Danielle is an extremely intelligent and organized team member, but she's not a technologist per se; she's the model for what I call a technology-empowered team member. Danielle had never used Airtable before but was, and is, very adept at using spreadsheets to track projects and project finances. She was able to whip something up in no time.
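Beyond the point-and-click interface Danielle used, Airtable also exposes a REST API, which hints at why it interests us more than a plain spreadsheet. As a minimal sketch, here is how one might pull records programmatically from a client-tracking base like the one described above. The base ID, the "Clients" table, and the "Engagement" field are hypothetical placeholders, not details from our actual base.

```python
"""Sketch: query a hypothetical Airtable "Clients" table over the REST API.
The base ID, table name, field name, and API key are all placeholders."""
import json
import urllib.parse
import urllib.request

API_ROOT = "https://api.airtable.com/v0"

def build_client_status_request(base_id: str, table: str, engagement: str) -> str:
    """Build a GET URL that filters the table to one engagement type."""
    formula = f"{{Engagement}} = '{engagement}'"
    query = urllib.parse.urlencode({"filterByFormula": formula})
    return f"{API_ROOT}/{base_id}/{urllib.parse.quote(table)}?{query}"

def fetch_records(url: str, api_key: str) -> list:
    """Perform the request; Airtable returns JSON with a 'records' list."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["records"]

url = build_client_status_request("appXXXXXXXXXXXXXX", "Clients", "Subscription")
print(url)
# fetch_records(url, "YOUR_API_KEY")  # requires a real base ID and API key
```

The point is not the code itself but that the same base a non-technologist builds in the UI is immediately queryable by the rest of your systems.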
Here are my initial thoughts which will guide my evaluation of Airtable for The Modern Enterprise in four upcoming articles (see below).
Here are some great links to get you started until the next few posts on Airtable.
This article is part of a larger series:
Check Airtable out here!
Disclaimer: We aren’t affiliated with Airtable, nor are we getting anything in return for this; we just love it when there’s awesome new technology out there that solves a number of pain points elegantly.
I had the pleasure this past Wednesday of introducing Eric Pugh (@dep4b) to the Data Wranglers DC Meetup group. He spoke about using Solr and Zeppelin in data processing, specifically the ways big data can easily be processed and displayed as visualizations in Zeppelin. He also touched on Docker, an application Anant uses, and its role in setting up environments for data processing and analysis. Unfortunately, no actual blimps or zeppelins were seen during the talk, but the application of data analysis to the events they usually fly over was the subject of last month's discussion about Spark, Kafka, and the English Premier League.
Instead of trying to completely rehash Eric’s presentation, please check out his material for yourself (available below). In short, he showed how multiple open-source tools can be used to process, import, manipulate, visualize, and share your information. More specifically, Spark is a fast data processing engine which you can use to prepare your data for presentation and analysis. Zeppelin, meanwhile, is a mature, enterprise-ready application, as shown by its recent graduation from Apache’s Incubator program, and is a great tool to manipulate and visualize processed data.
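The prepare-then-visualize split Eric described can be illustrated with a toy aggregation, the kind of group-by you would express in a Spark paragraph inside a Zeppelin notebook before charting the result. To keep the sketch runnable anywhere, it uses only Python's standard library rather than Spark itself, and the event data is invented.

```python
"""Toy version of the "prepare your data for presentation" step: group raw
events by a key and summarize them, the shape of work one would hand to
Spark in a Zeppelin notebook. The event data is invented for illustration."""
from collections import Counter

raw_events = [
    {"page": "/home", "ms": 120},
    {"page": "/docs", "ms": 340},
    {"page": "/home", "ms": 95},
    {"page": "/docs", "ms": 410},
    {"page": "/blog", "ms": 150},
]

def hits_per_page(events):
    """Count events per page, akin to a groupBy().count() in Spark."""
    return Counter(e["page"] for e in events)

def mean_latency(events):
    """Average latency per page, ready to plot as a bar chart in Zeppelin."""
    totals, counts = {}, Counter()
    for e in events:
        totals[e["page"]] = totals.get(e["page"], 0) + e["ms"]
        counts[e["page"]] += 1
    return {p: totals[p] / counts[p] for p in totals}

print(hits_per_page(raw_events))
print(mean_latency(raw_events))
```

In Zeppelin, a table like the second result becomes a chart with one click, which is exactly the manipulate-and-visualize workflow the talk demonstrated.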
Please don’t hesitate to reach out with any questions or if you are interested in participating or speaking at a future Data Wranglers DC event. Each event is recorded, livestreamed on the Data Community DC Facebook page and attended by 50 or more individuals interested in data wrangling, data processing, and possible outcomes from these efforts. After the monthly event, many members continue their discussions at a local restaurant or bar.
I hope to see you at an event in the near future!
A: To make it easy to package and ship code.
Docker can be your assembly line for software production. If you’re building software with complex architecture, using software like Docker can significantly reduce the time for software development, testing, and deployment through the use of “containers”. For the client, this approach can significantly reduce software development costs, accelerate delivery cycles and launch times of ideas, and potentially decrease coding errors that hinder your services and hurt the bottom line. Docker is used by thousands of companies as part of their DevOps processes and its adoption is expected to continue to grow. Here are a few examples: Red Hat, Rackspace, Spotify, and more!
One of Docker’s great attributes is its ‘malleability.’ You can rapidly build things and tear them down if needed, enabling you to nimbly adapt when an urgent application deployment arises. Docker’s greatest utility is in situations where a client or project needs to quickly stand up a development and testing environment, an application, and all the associated dependencies. We plan to show how we are using Docker internally and externally to serve both our clients’ needs and our own.
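To make "stand up an environment quickly" concrete, here is a minimal, hypothetical `docker-compose.yml` for a web application and its database. The service names, images, and ports are illustrative, not taken from any project mentioned here.

```yaml
# Hypothetical two-service stack: one command brings it up, one tears it down.
version: "2"
services:
  web:
    build: .            # image built from the project's own Dockerfile
    ports:
      - "8000:8000"     # host:container
    depends_on:
      - db
  db:
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: example
```

`docker-compose up -d` starts both containers with their dependencies, and `docker-compose down` removes them again: that is the rapid build-and-tear-down cycle described above.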
To a technologist, the beneficial aspects of Docker are clear; however, there are benefits that accrue to the end user as well. First, the end user more than likely will not need to modify their hardware and software setup to accommodate Docker-assembled applications. Applications will not force the user to restart the whole system or worry about ‘fluff’ filling their servers. Finally, the isolation inherent in Docker containers walls off the application while also reducing the draw on computing power, providing some additional security advantages and better overall performance of computing platforms.
The workshop on 12/17 will focus on the technical side of using Docker. We will present Docker basics for both Linux and Windows, as well as a review of Docker Compose and other Docker tools. We want everyone to know how to use these features, and we hope you will leave able to explain why Docker is being adopted and how end users benefit.
Here at Anant we are very interested in data wrangling (aka data munging), which basically means we want to help people take data in one format and convert it to the form that best suits their needs. One way we keep up to date is through the excellent Data Wranglers DC group that meets monthly here in Washington.
At the most recent meeting, the group tackled the challenge of integrating real-time video and data streams. Mark Chapman, a Solutions Engineering Manager at Talend, explained how his company used Spark and Kafka in their product to analyze real-time data from the English Premier League (EPL). In addition to video input at 25 frames per second from cameras throughout the stadium, the stream was correlated with data on players’ heart rates and other measurements. The EPL is then able to overlay this information onto replays to improve presentation and analysis, as well as send data to companies offering in-game wagers.
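The correlation step, matching each video frame's timestamp to the nearest sensor reading, can be sketched in a few lines. The 25 fps figure comes from the talk; the heart-rate samples and function names below are invented for illustration.

```python
"""Sketch: pair a 25 fps frame stream with slower sensor readings by
nearest timestamp. Frame rate is from the talk; the data is invented."""
import bisect

FPS = 25
frame_times = [i / FPS for i in range(100)]  # 4 seconds of frame timestamps

# Heart-rate samples arrive far less often than frames (hypothetical data).
hr_samples = [(0.0, 148), (1.0, 152), (2.0, 157), (3.0, 155)]

def nearest_reading(t, samples):
    """Return the sensor value whose timestamp is closest to t."""
    times = [s[0] for s in samples]
    i = bisect.bisect_left(times, t)
    candidates = samples[max(0, i - 1):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - t))[1]

# One (timestamp, heart_rate) pair per frame, ready to overlay on a replay.
overlay = [(t, nearest_reading(t, hr_samples)) for t in frame_times]
print(overlay[26])  # the frame at t=1.04s pairs with the 1.0s sample
```

At real scale this join runs continuously over Kafka-delivered streams inside Spark rather than over in-memory lists, but the nearest-timestamp idea is the same.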
The presentation was very interesting and Mark graciously shared his slides:
If you are in any way interested in data wrangling – just like it sounds, getting data under control and working for you – we would love to hear from you and let you know what might be possible with your data streams. If you are in DC and are interested in the technical side of data munging, please come out to the next event and meet us. This past presentation was hosted by ByteCubed (@bytecubed) in Crystal City, but the gatherings have been held in Foggy Bottom as well.