Importing Data

Length: 00:12:10

Lesson Summary:

In this lesson, we will cover how to bulk import data into a Cloud SQL instance, including from SQL dump and CSV files. Detailed steps of what was performed in the lesson are below:

We will be using a sample data GitHub repo here:

The original source of this data, on Google's GitHub, can be viewed here:

  1. Clone our sample data to Cloud Shell:

    • Open Cloud Shell.
    • Clone the data from repo to Cloud Shell.
    • git clone
  2. Create the Cloud Storage bucket to copy data:

    • gsutil mb -l (your_region) gs://(bucket_name)
  3. Browse to the Cloud SQL sample data directory:

    • cd ~/googledataengineer/CPB100/lab3a/cloudsql
  4. Copy all data into your cloud bucket:

    • gsutil cp * gs://(bucket_name)
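Taken together, steps 1–4 can be run in Cloud Shell as one short sequence. This is a sketch: the repo URL, region, and bucket name in parentheses are placeholders you would substitute with your own values, and it assumes the repo clones into your Cloud Shell home directory.

```shell
# Clone the sample data repo into Cloud Shell (repo URL is a placeholder).
git clone (repo_url)

# Create a regional Cloud Storage bucket; bucket names must be globally unique.
gsutil mb -l (your_region) gs://(bucket_name)

# Change into the Cloud SQL sample data directory inside the clone.
cd ~/googledataengineer/CPB100/lab3a/cloudsql

# Copy every file in the directory to the bucket.
gsutil cp * gs://(bucket_name)
```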
  5. Import an SQL dump file into Cloud SQL:

    • From Cloud SQL, click on the instance and click the Import button.
    • Click Browse, select bucket, browse to table_creation.sql, and click Select.
    • Click Import.
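If you prefer the command line to the console, the same SQL dump import can be sketched with the gcloud CLI; the instance and bucket names are placeholders, and the command assumes the instance's service account has read access to the bucket.

```shell
# Import the SQL dump from Cloud Storage into the Cloud SQL instance.
gcloud sql import sql (instance_name) gs://(bucket_name)/table_creation.sql
```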
  6. Import CSV tables into the SQL database (recommendation_spark):

    • From Cloud SQL, click on the instance and click the Import button.
    • Click Browse, select bucket, browse to accommodation.csv, and click Select.
    • Expand advanced options, and from the Database drop-down menu, select recommendation_spark.
    • Set the Table name to Accommodation.
    • Click Import.
    • Perform the same actions for the rating.csv file as well, setting the Table name to Rating.
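The two CSV imports above can also be sketched from the command line. Instance and bucket names are placeholders; the `--database` and `--table` flags map to the advanced options chosen in the console.

```shell
# Import each CSV into the recommendation_spark database, naming the target table.
gcloud sql import csv (instance_name) gs://(bucket_name)/accommodation.csv \
    --database=recommendation_spark --table=Accommodation

gcloud sql import csv (instance_name) gs://(bucket_name)/rating.csv \
    --database=recommendation_spark --table=Rating
```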
  7. Connect to your Cloud SQL instance:

    • Click Connect using Cloud Shell.
    • In Cloud Shell, press Enter once the command is populated.
    • Enter the root password when prompted.
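The Connect button populates a command along these lines; you can also type it yourself in Cloud Shell (instance name is a placeholder).

```shell
# Opens a temporarily allowlisted connection to the instance and launches
# the mysql client; you will be prompted for the root password.
gcloud sql connect (instance_name) --user=root
```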
  8. View tables and table data:

    • Switch to database:
      • use recommendation_spark;
    • View tables:
      • show tables;
    • View contents of one of the tables:
      • select * from Rating;
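The same verification queries can be run non-interactively with the mysql client. This sketch assumes your instance has a public IP and that your client address is in the instance's authorized networks; the IP is a placeholder.

```shell
# Run the database/table checks in one shot instead of an interactive session.
mysql --host=(instance_ip) --user=root --password \
      --execute="USE recommendation_spark; SHOW TABLES; SELECT * FROM Rating LIMIT 10;"
```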
