
Received malformed response from transform AWS::Serverless-2016-10-31 #765

Closed
orr-levinger opened this issue Nov 25, 2020 · 21 comments

@orr-levinger

orr-levinger commented Nov 25, 2020

Description

The template wasn't changed at all; I just redeployed and got this error.
There is no information in the logs, so I can't really tell what's wrong.

Reproduction Steps

Same as #761.

Serverless: Typescript compiled.
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Installing dependencies for custom CloudFormation resources...
Serverless: Installing dependencies for custom CloudFormation resources...
Serverless: [serverless-plugin-split-stacks]: Summary: 17 resources migrated in to 2 nested stacks
Serverless: [serverless-plugin-split-stacks]: Resources per stack:
Serverless: [serverless-plugin-split-stacks]: - (root): 195
Serverless: [serverless-plugin-split-stacks]: - APINestedStack: 10
Serverless: [serverless-plugin-split-stacks]: - PermissionsNestedStack: 7
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service sls-blind-chat.zip file to S3 (23.86 MB)...
Serverless: Uploading custom CloudFormation resources...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
CloudFormation - UPDATE_IN_PROGRESS - AWS::CloudFormation::Stack - sls-blind-chat-dev-stack
CloudFormation - UPDATE_IN_PROGRESS - AWS::CloudFormation::Stack - sls-blind-chat-dev-stack
CloudFormation - UPDATE_ROLLBACK_IN_PROGRESS - AWS::CloudFormation::Stack - sls-blind-chat-dev-stack
CloudFormation - UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS - AWS::CloudFormation::Stack - sls-blind-chat-dev-stack
CloudFormation - UPDATE_ROLLBACK_COMPLETE - AWS::CloudFormation::Stack - sls-blind-chat-dev-stack
Serverless: Operation failed!
Serverless: View the full error output: https://us-east-1.console.aws.amazon.com/cloudformation/home?region=us-east-1#/stack/detail?stackId=arn%3Aaws%3Acloudformation%3Aus-east-1%3A956109742295%3Astack%2Fsls-blind-chat-dev-stack%2Fcac3ba10-29e0-11eb-b791-0a73682547f5

Serverless Error ---------------------------------------

An error occurred: sls-blind-chat-dev-stack - Received malformed response from transform AWS::Serverless-2016-10-31.
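
Since the deploy output above does not surface the underlying failure reason, one way to dig further is to list the stack's recent CloudFormation events, which usually include a status reason for the step that failed. Below is a minimal diagnostic sketch using boto3, with the stack name and region taken from the log above; adjust as needed.

# Sketch: print recent CloudFormation stack events to surface the failure reason.
# Assumes AWS credentials are configured and boto3 is installed.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")
events = cfn.describe_stack_events(StackName="sls-blind-chat-dev-stack")

# Events are returned most recent first.
for event in events["StackEvents"][:20]:
    print(
        event["Timestamp"],
        event["LogicalResourceId"],
        event["ResourceStatus"],
        event.get("ResourceStatusReason", ""),
    )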

This is a 🐛 bug-report

orr-levinger added the bug and needs-triage labels on Nov 25, 2020
@MatiasSasRise

Same issue here: us-east-1

@agustinezequielgomez-rise

I'm having the same issue on us-east-1

@agworkscode

Same issue here. us-east-1

@ashishdhingra
Contributor

Hi,

Good morning.

Looks like there is a service outage in the us-east-1 region. The service teams are working on it, and the issue should be resolved soon.

Thanks,
Ashish

@KitFristo

I am also seeing the same issue in us-east-1.

@MattCopenhaver

Same issue here.

@sabari-karthik

+1

@rahulict

Same issue here.

@CJX3M

CJX3M commented Nov 25, 2020

Same issue here, deploying some lambdas

@mochi-co

Same issue here also, deploying Go lambdas

@ItayGoren

same issue - python lambdas

@jewelsjacobs

➕ 1

@slitsevych

Apparently the us-east-1 region is not having a good time right now.

  • CloudWatchEvents operational issue
  • Cognito operational issue
  • Cloudwatch operational issue
  • Kinesis operational issue:

[08:12 AM PST] Kinesis Data Streams customers are still experiencing increased API errors. This is also impacting other services, including ACM, Amplify Console, API Gateway, AppStream2, AppSync, Athena, Cloudformation, Cloudtrail, CloudWatch, Cognito, Connect, DynamoDB, EventBridge, IoT Services, Lambda, LEX, Managed Blockchain, Resource Groups, SageMaker, Support Console, and Workspaces. We are continuing to work on identifying root cause.

@KitFristo

As @slitsevych pointed out, that region is having a lot of outages. You can find them here, although it seems the outages are also affecting the status dashboard.

@AdamT213

same here, ruby and node.js lambdas

@pam81

pam81 commented Nov 25, 2020

+1

@ashishdhingra
Contributor

Hi All,

There is a comment in the related older issue #761 that the problem is fixed. Please verify and confirm whether we can close this issue.

Thanks,
Ashish

ashishdhingra added the response-requested label on Nov 25, 2020
@KitFristo

KitFristo commented Nov 26, 2020

@ashishdhingra I'm still seeing the issue as of 11:18 PM EST in us-east-1.

@manisha1895

Working for me now; I was able to deploy a SAM stack in the us-east-1 region.

The github-actions bot removed the response-requested label on Nov 27, 2020
@ashishdhingra
Contributor

As per service dashboard at https://status.aws.amazon.com/, everything appears to be running normally.

@github-actions
Contributor

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.
