MapD Core

The world's fastest in-memory GPU database powers the world's most immersive data exploration experience
What It Is

Designed from the ground up to run on GPUs, MapD Core is an in-memory, column-store, relational database that delivers exceptional speed at scale. By taking advantage of the parallel processing power of the hardware, MapD Core can query billions of rows in milliseconds using standard SQL. Think of it as supercomputing for everyone: no GPU experience required.
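The column-store idea above can be made concrete with a small sketch. This is plain Python, not MapD code, and the table and column names are invented: the point is that a query needing only two columns touches only those two contiguous arrays, which is exactly the layout GPUs and SIMD CPUs parallelize well.

```python
# Sketch (not MapD code): why a column store scans fast.
# A row store touches every field of every record; a column store
# touches only the columns the query needs, and the contiguous
# array layout is what parallel hardware chews through efficiently.
from array import array

# Hypothetical table stored column-wise; values are synthetic.
n = 600_000
arrdelay = array("d", (i % 60 - 10 for i in range(n)))  # delay minutes
carrier = array("b", (i % 4 for i in range(n)))         # carrier id

# "SELECT avg(arrdelay) WHERE carrier = 2" reads only two columns.
total, count = 0.0, 0
for c, d in zip(carrier, arrdelay):
    if c == 2:
        total += d
        count += 1
print(total / count)  # average delay for carrier 2
```

A GPU runs the same per-value work across thousands of threads instead of one Python loop, which is where the milliseconds-over-billions figures come from.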

Lightning Fast

From third parties to partners to our own internal benchmarks, MapD Core has proven time and time again that it is orders of magnitude faster than even the fastest legacy CPU solutions.
Think 147 billion rows per second fast and getting better every day. Think 75 times faster than 30 nodes of Redshift fast. Think change your business forever fast.

Data source: 10x copy of the flights dataset (1.2B rows)
Query 1 `select carrier_name, avg(arrdelay) from flights group by carrier_name`
Query 2 `select origin_name, dest_name, avg(arrdelay) from flights group by origin_name, dest_name`
Query 3 `select date_trunc(month,dep_timestamp) as ym, avg(arrdelay) as del from flights group by ym`
Query 4 `select dest_name, extract(month from dep_timestamp) as m, extract(year from dep_timestamp) as y, avg(arrdelay) as del from flights group by dest_name,y,m`
Query 5 `select count(*) from flights where origin_name='Lambert-St Louis International' and dest_name = 'Lincoln Municipal'`
System configurations
MapD: 1 machine (8 cores, 384GB RAM, 2 x 2TB SSD, 8 Nvidia K40)
In-memory DB 1: 10 machines (16 cores, 64GB RAM, EBS storage, m4.4xlarge)
In-memory DB 2: 3 machines (32 cores, 244GB RAM, 2 x 320GB SSD, r3.8xlarge)
Hadoop OLAP: 10 machines (16 cores, 64GB RAM, EBS storage, m4.4xlarge)
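Query 3 above relies on `date_trunc(month, ...)`. As a sanity check of its semantics (illustration only; the rows below are invented, not benchmark data), here is a toy Python equivalent of truncating each departure timestamp to its month and averaging the delay per group:

```python
# Toy equivalent of Query 3: truncate dep_timestamp to the month,
# then average arrdelay per month. Rows are invented examples.
from collections import defaultdict
from datetime import datetime

flights = [
    (datetime(2008, 1, 3, 9, 30), 12.0),
    (datetime(2008, 1, 21, 17, 5), -4.0),
    (datetime(2008, 2, 2, 6, 45), 30.0),
]

acc = defaultdict(lambda: [0.0, 0])
for dep_ts, arrdelay in flights:
    # Group key: year-month pair, mirroring date_trunc(month, dep_timestamp).
    ym = (dep_ts.year, dep_ts.month)
    acc[ym][0] += arrdelay
    acc[ym][1] += 1

result = {ym: s / n for ym, (s, n) in acc.items()}
print(result)  # {(2008, 1): 4.0, (2008, 2): 30.0}
```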
Speaks SQL Natively
MapD Core was built to execute the SQL your organization already knows, but at speeds hundreds of times faster than CPU-based solutions. Filter, group, aggregate and join billions of rows of data in milliseconds, allowing interactive ad-hoc exploration of the biggest datasets. To make it easy to fit into your existing data ecosystem, MapD Core supports a full battery of standard connectors, including JDBC, ODBC, Thrift, Kafka and Sqoop.
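Because the SQL is standard, queries shaped like the benchmark ones run unchanged on any compliant engine. As a stand-in illustration (SQLite here, not MapD; the table and rows are invented), the same group-by-average pattern from Query 1 looks like this through a Python connector:

```python
# Standard SQL illustration using SQLite as a stand-in engine.
# Against MapD Core the same statement would go over JDBC/ODBC/Thrift.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table flights (carrier_name text, arrdelay real)")
con.executemany(
    "insert into flights values (?, ?)",
    [("AA", 5.0), ("AA", 15.0), ("UA", -3.0), ("UA", 9.0)],
)
rows = con.execute(
    "select carrier_name, avg(arrdelay) from flights "
    "group by carrier_name order by carrier_name"
).fetchall()
print(rows)  # [('AA', 10.0), ('UA', 3.0)]
```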
What Power Means
Freedom from Indexes
The parallel power of MapD means that users don't need to index their data. Queries are just effortlessly fast, no DBA required.
No Need to Downsample
MapD’s innovative approach to memory management enables billions of rows of data to be scanned in milliseconds, eliminating the need to engage in risky downsampling.
Intelligent Scale
Scale up and out intelligently to optimize your price-performance characteristics. A single server of MapD on GPUs does the work of dozens, if not hundreds, of CPU servers.
What Powers the Engine


Query Compilation

Queries are compiled with a JIT compilation framework built on LLVM, which transforms query plans into machine code for Nvidia GPUs and x64 CPUs. Other leading in-memory CPU-based databases typically use interpreters or source-to-source compilers; MapD’s compiled queries can run several orders of magnitude faster.
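The compile-once, run-per-row idea can be sketched in a few lines. This is an illustration only, not MapD's LLVM pipeline: Python's `compile` stands in for code generation, and the predicate and column names are invented.

```python
# Sketch of JIT query compilation (illustration only; MapD emits real
# GPU/CPU machine code via LLVM, not Python bytecode). An interpreter
# re-walks the expression tree for every row; a compiler specializes
# the WHERE clause into one function before the scan starts.

def compile_filter(predicate_src):
    """'Compile' a SQL-like predicate string into a row-filter function."""
    code = compile(f"lambda row: {predicate_src}", "<query>", "eval")
    return eval(code, {"__builtins__": {}})

# Hypothetical predicate over a row dict; column names are assumptions.
flt = compile_filter("row['arrdelay'] > 10 and row['carrier'] == 'AA'")

rows = [
    {"carrier": "AA", "arrdelay": 15.0},
    {"carrier": "UA", "arrdelay": 30.0},
    {"carrier": "AA", "arrdelay": 2.0},
]
matches = [r for r in rows if flt(r)]
print(len(matches))  # 1
```

The compilation cost is paid once per query, then amortized over billions of rows, which is why it beats per-row interpretation.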

Memory Management

MapD’s pioneering approach to memory management keeps data either on the GPU or in close proximity, drawing on the large memory footprint available on the CPU side. Further, using the latest SSD options, MapD can maintain its exceptional query speed in virtually any configuration. Finally, MapD’s scale-out options allow enterprises to prioritize memory over computation by combining ultrafast VRAM across servers.
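One way to picture the GPU/CPU/SSD hierarchy is as a tiered chunk cache. The sketch below is an assumption-laden illustration, not MapD internals: a small fast tier stands in for GPU VRAM, and least-recently-used chunks spill to the next tier down.

```python
# Sketch (assumptions, not MapD internals): hottest data chunks stay in
# the fast tier ("VRAM"); LRU chunks spill to the slower tier ("RAM/SSD").
from collections import OrderedDict

class TieredCache:
    def __init__(self, capacity):
        self.capacity = capacity   # chunks that fit in the fast tier
        self.fast = OrderedDict()  # stands in for GPU VRAM
        self.slow = {}             # stands in for CPU RAM / SSD

    def get(self, chunk_id, load):
        if chunk_id in self.fast:            # hot hit: already on the "GPU"
            self.fast.move_to_end(chunk_id)
            return self.fast[chunk_id]
        data = self.slow.pop(chunk_id, None)
        if data is None:
            data = load(chunk_id)            # cold miss: read from "SSD"
        self.fast[chunk_id] = data
        if len(self.fast) > self.capacity:   # evict LRU chunk downward
            old_id, old_data = self.fast.popitem(last=False)
            self.slow[old_id] = old_data
        return data

cache = TieredCache(capacity=2)
for cid in [0, 1, 0, 2, 1]:                  # simulated access pattern
    cache.get(cid, load=lambda c: f"chunk-{c}")
print(sorted(cache.fast))  # the two most recently used chunks: [1, 2]
```

Queries whose working set fits the fast tier run at full speed; colder chunks are promoted on demand.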

Hybrid Execution

In addition to GPUs, MapD fully exploits the performance of CPUs, executing compiled queries simultaneously on both. Queries too large for GPU memory can be executed entirely on the CPU. MapD uses the same infrastructure to parallelize computation across CPUs as it does across GPUs, delivering competition-beating performance even on CPU.
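The split-and-merge pattern behind hybrid execution can be sketched as follows. This is an illustration under invented assumptions, not MapD code: two workers stand in for a GPU and a CPU, each runs the same aggregate kernel over its partition, and the partial results are merged.

```python
# Sketch of hybrid execution: the same compiled kernel runs over a GPU
# partition and a CPU partition; per-device partials merge at the end.
from concurrent.futures import ThreadPoolExecutor

def partial_sum_count(partition):
    """The 'kernel': one aggregate pass over one device's data slice."""
    return sum(partition), len(partition)

data = list(range(1, 101))                 # pretend column of 100 values
gpu_part, cpu_part = data[:80], data[80:]  # uneven split, like VRAM vs RAM

with ThreadPoolExecutor(max_workers=2) as pool:
    partials = list(pool.map(partial_sum_count, [gpu_part, cpu_part]))

total = sum(s for s, _ in partials)
count = sum(n for _, n in partials)
print(total / count)  # average over all 100 values -> 50.5
```

Averages, counts and sums all decompose this way, which is what lets one query span devices with different speeds and capacities.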

Iris Rendering Engine

The Iris Rendering Engine gives users the ability to visualize billions of records at their finest grain by leveraging the native graphics pipeline of the server-side GPUs. By rendering the results of a SQL query in situ (the query results are already on the GPU), MapD obviates the need to send multi-gigabyte result sets from server to client, transferring only a small PNG instead. The Iris Rendering Engine accepts a subset of the open-source Vega visualization API, providing a powerful and expressive way to generate pixel-perfect visualizations of any dataset. The MapD Immerse Visual Analytics System heavily leverages the Iris Rendering Engine, and custom apps built on MapD Core can harness its power via the Vega API.
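To suggest the shape of such a request, here is a hypothetical Vega-style spec built as a Python dict. The field names follow the open-source Vega grammar, and the table, column names, and `sql` data-source field are assumptions for illustration; the exact subset MapD accepts is defined by its Vega documentation, not by this sketch.

```python
# Hypothetical sketch of a Vega-style render spec (field names follow
# the open-source Vega grammar; table/column names are invented, and
# the "sql" data source mirrors the in-situ rendering described above).
import json

spec = {
    "width": 800,
    "height": 500,
    "data": [
        {
            "name": "flight_points",
            # The data source is a SQL query evaluated server-side,
            # so only the rendered PNG leaves the GPU.
            "sql": "SELECT lon AS x, lat AS y FROM flights",
        }
    ],
    "marks": [
        {
            "type": "points",
            "from": {"data": "flight_points"},
            "properties": {"x": {"field": "x"}, "y": {"field": "y"}},
        }
    ],
}

payload = json.dumps(spec)  # what a client might send to the render API
print(len(payload) > 0)
```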

Schedule a Demonstration

Find out where speed might tip the competitive scales in your favor by getting a demonstration from our team of specialists.


Buy It On Prem Or Cloud

On premise or in the cloud, GPU hardware has become ubiquitous. MapD is supported by and partnered with some of the largest, most sophisticated providers in the market, whether you want to construct an order, spin up an instance or learn more about how to deploy our software.
