Snow SQL

This document contains code snippets for connecting to Snowflake and performing common operations with SnowSQL and the Python Snowflake connector. It creates a database, warehouse, schema, and table; inserts sample data; copies data in from an internal stage or from S3; and shows how to execute queries asynchronously and retrieve their results. It also demonstrates logging configuration for the Python connector.


create or replace database sf_tuts;

select current_database(), current_schema();

create or replace table emp_basic (
  first_name string,
  last_name string,
  email string,
  streetaddress string,
  city string,
  start_date date
);

create or replace warehouse sf_tuts_wh with
  warehouse_size = 'X-SMALL'
  auto_suspend = 180
  auto_resume = true
  initially_suspended = true;

select current_warehouse();

put file:///tmp/employees0*.csv @sf_tuts.public.%emp_basic; -- for linux using cli
put file://c:\temp\employees0*.csv @sf_tuts.public.%emp_basic; -- for windows using cli
list @sf_tuts.public.%emp_basic; -- list stage files
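-- note: the @% prefix refers to the table stage that Snowflake creates
-- automatically for emp_basic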

## add data into table from stage file location


copy into emp_basic
from @%emp_basic
file_format = (type = csv field_optionally_enclosed_by='"')
pattern = '.*employees0[1-5].csv.gz'
on_error = 'skip_file';
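-- note: PUT gzip-compresses uploads by default (auto_compress = true), hence
-- the .csv.gz pattern; on_error = 'skip_file' skips any file that fails to load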

## add extra rows into table


insert into emp_basic values
  ('Clementine','Adamou','cadamou@sf_tuts.com','10510 Sachs Road','Klenak','2017-9-22'),
  ('Marlowe','De Anesy','madamouc@sf_tuts.co.uk','36768 Northfield Plaza','Fangshan','2017-1-26');

## select with like filter
select email from emp_basic where email like '%.uk';

##drop
drop database if exists sf_tuts;
drop warehouse if exists sf_tuts_wh;

-----------------------------------------------------------------------
USING PYTHON SNOWFLAKE CONNECTOR

import snowflake.connector

# create connection object
conn = snowflake.connector.connect(
    user=USER,
    password=PASSWORD,
    account=ACCOUNT,
    warehouse=WAREHOUSE,
    database=DATABASE,
    schema=SCHEMA
)
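A minimal sketch of releasing the session once the statements below have run
(conn.close() is part of the connector's connection API; the try/finally
placement here is illustrative, not from the original snippets):

try:
    ...  # run the statements shown in the following sections
finally:
    conn.close()  # always release the session, even if a statement fails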

conn.cursor().execute("CREATE WAREHOUSE IF NOT EXISTS


tiny_warehouse_mg")
conn.cursor().execute("CREATE DATABASE IF NOT EXISTS testdb_mg")
--CREATE WH,DBBBB,SCHEMA
conn.cursor().execute("USE DATABASE testdb_mg")
conn.cursor().execute("CREATE SCHEMA IF NOT EXISTS testschema_mg")

conn.cursor().execute("USE WAREHOUSE tiny_warehouse_mg")


conn.cursor().execute("USE DATABASE testdb_mg")
conn.cursor().execute("USE SCHEMA testdb_mg.testschema_mg")

# create table
conn.cursor().execute(
    "CREATE OR REPLACE TABLE "
    "test_table(col1 integer, col2 string)")

# insert data
conn.cursor().execute(
    "INSERT INTO test_table(col1, col2) VALUES " +
    "    (123, 'test string1'), " +
    "    (456, 'test string2')")
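The same insert can be written with bound parameters instead of literals; a
minimal sketch, assuming the connector's default pyformat paramstyle (the
row values here are made up for illustration):

# insert one more row; %s placeholders are filled from the sequence argument
conn.cursor().execute(
    "INSERT INTO test_table(col1, col2) VALUES (%s, %s)",
    (789, 'test string3'))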

# Putting Data
conn.cursor().execute("PUT file:///tmp/data/file* @%test_table")
# load the staged files through the table's internal stage
conn.cursor().execute("COPY INTO test_table")

# Copying Data from S3 (assumes a storage integration named myint exists)
conn.cursor().execute("""
    COPY INTO test_table FROM s3://<s3_bucket>/data/
    STORAGE_INTEGRATION = myint
    FILE_FORMAT = (field_delimiter = ',')
""")

# Retrieving a Snowflake Query ID
cur = conn.cursor()
cur.execute("SELECT * FROM test_table")
print(cur.sfqid)

import time
...
# Execute a long-running query asynchronously.
cur.execute_async('select count(*) from table(generator(timeLimit => 25))')
...
# Wait for the query to finish running.
query_id = cur.sfqid
while conn.is_still_running(conn.get_query_status(query_id)):
    time.sleep(1)
Example:
from snowflake.connector import ProgrammingError
import time
...
# Wait for the query to finish running and raise an error
# if a problem occurred with the execution of the query.
try:
    query_id = cur.sfqid
    while conn.is_still_running(conn.get_query_status_throw_if_error(query_id)):
        time.sleep(1)
except ProgrammingError as err:
    print('Programming Error: {0}'.format(err))

# Get the results from a query.
cur.get_results_from_sfqid(query_id)
results = cur.fetchall()
print(f'{results[0]}')
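A hedged alternative for result retrieval: fetching into a pandas DataFrame
instead of a list of tuples (assumes the connector was installed with its
pandas extra, e.g. snowflake-connector-python[pandas]):

# run a query and pull the whole result set into a DataFrame
cur.execute("SELECT * FROM test_table")
df = cur.fetch_pandas_all()  # requires the connector's pandas extra
print(df.head())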

Logging
For example, to set the logging level to INFO and store the logs in a file
named /tmp/snowflake_python_connector.log:

import logging

file_name = '/tmp/snowflake_python_connector.log'
logging.basicConfig(
    filename=file_name,
    level=logging.INFO)

# Logging including the timestamp, thread and the source code location
import logging
for logger_name in ['snowflake.connector', 'botocore', 'boto3']:
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)
    ch = logging.FileHandler('/tmp/python_connector.log')
    ch.setLevel(logging.DEBUG)
    ch.setFormatter(logging.Formatter(
        '%(asctime)s - %(threadName)s %(filename)s:%(lineno)d'
        ' - %(funcName)s() - %(levelname)s - %(message)s'))
    logger.addHandler(ch)
