amazon web services - Independent Python subprocess from AWS Lambda function
I have created a Lambda function (app1) that reads from and writes to RDS.
My Lambda function is written in Python 2.7 and uploaded as a zipped package.
I created and tested the zipped package on an EC2 instance in the same VPC as the RDS instance and the Lambda function.
Next, I added functionality to the Lambda function to launch an independent subprocess (app2) using subprocess.Popen, and had app1 return while the app2 subprocess continued on its own. I tested that app1 returns the handler's output while app2 keeps running by putting a 60-second sleep in app2 and tailing app2's output file.
I tested both the app1 and app2 functionality on the EC2 instance.
After uploading the new package, app1 appears to behave as expected and returns the handler's output immediately, but the app2 functionality doesn't "appear" to be instantiated: there are no logs, errors, or captured output from app2.
In app1, I tested that subprocess calls work by performing subprocess.check_output(['ls','-la']) before and after the independent subprocess.Popen, and the local folder's files were listed. Except that the app2output file is not created as expected.
Two questions:
- Is there something special I am missing in AWS Lambda concepts that causes app2 to "fail"? By "fail" I mean it is not creating and writing to the new file, not creating logs in CloudWatch the same way app1 does, and not printing to the Lambda console the way app1 does.
- How do I catch the output (logging info and errors) from app2 in the AWS Lambda environment?
app1.py
import subprocess
import sys
import logging
import rds_config
import pymysql

# RDS settings
rds_host = "rdshost"
name = rds_config.db_username
password = rds_config.db_password
db_name = rds_config.db_name
port = 3306

logger = logging.getLogger()
logger.setLevel(logging.INFO)

server_address = (rds_host, port)

try:
    conn = pymysql.connect(rds_host, user=name, passwd=password, db=db_name, connect_timeout=5)
except:
    logger.error("ERROR: Unexpected error: could not connect to MySQL instance.")
    sys.exit()

def handler(event, context):
    cur = conn.cursor()
    isql = "insert ..."
    cur.execute(isql)
    conn.commit()
    newid = cur.lastrowid
    cur.close()

    args = [str(newid), str(event['name'])]
    logger.info('args: ' + str(args))

    print 'pwd: '
    output = subprocess.check_output(['pwd'])
    print output

    print 'ls -la'
    output = subprocess.check_output(['ls', '-l'])
    print output

    pid = subprocess.Popen([sys.executable, "app2.py"] + args,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           stdin=subprocess.PIPE)
    logger.info('pid: ' + str(pid))

    output = subprocess.check_output(['ls', '-l'])
    print output

    return "{'status':'success','newid':'" + str(newid) + "'}"
the output "logger.info('pid: '+str(pid))" in app1.py
is like: "pid: <subprocess.popen object @ 0x7f51aba2a550>"
app2.py
import sys
import logging
from datetime import datetime
import time

fo = open('app2output', 'a+')
fo.write("Starting with: " + str(sys.argv) + "\n")

logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.info("Starting with: " + str(sys.argv) + "\n")

# log accumulated processing time
t1 = datetime.now()
time.sleep(60)
t2 = datetime.now()

tstring = "{'t1':'" + str(t1) + "','t2':'" + str(t2) + "','args':'" + str(sys.argv[1]) + "'}"
logger.info(tstring + "\n")
fo.write(tstring + "\n")
fo.close()
sys.exit()
The AWS Lambda environment is terminated once the handler function returns. You can't run subprocesses in the background in the AWS Lambda environment after the handler function has completed. You need to code your Lambda function to wait for the subprocess to complete before returning.
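A minimal sketch of such a handler, assuming the same app1.py layout as above (the args values here are placeholders): it blocks on the child with communicate(), so the invocation stays alive while app2 runs, and app2's stdout/stderr come back to app1 where they can be printed and so reach CloudWatch.

import subprocess
import sys

def handler(event, context):
    args = ['42', 'example']  # placeholder arguments; in app1 these come from the DB insert and the event
    child = subprocess.Popen([sys.executable, "app2.py"] + args,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
    # communicate() waits for app2 to exit and collects its stdout/stderr,
    # so the handler does not return while app2 is still running
    out, err = child.communicate()
    print out   # appears in CloudWatch Logs like app1's own prints
    print err
    return "{'status':'success','app2_exit':'" + str(child.returncode) + "'}"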