📝 Because the knowledge in this series builds with each post, this one will spend more time focusing on the how and only touch on the why when appropriate.
Check out the resume-uploader-starter branch. From here, we can install the dependencies and run our application.
📝 We want users to authenticate not only to restrict who can upload/download files, but also in case resumes have the same name, i.e. resume.pdf. So we'll use their generated user ID as a prefix to their resume.
npm i aws-amplify @aws-amplify/ui-react && amplify init
Enter n when asked if you would like to initialize Amplify with the default configuration. Change the build directory from build to out, since NextJS will be building our site as static HTML files.
📝 If you ever accidentally accept the default configuration or want to change it later, typing the command amplify configure project will take you back to the prompt.
amplify add auth
Default Configuration
Username
No, I am done.
amplify add api
GraphQL
[enter] to select the default name
Amazon Cognito User Pool
No, I am done.
No
Single object with fields
Yes
type Candidate
@model
@auth(rules: [{ allow: owner, operations: [create, update] }]) {
id: ID!
name: String!
email: String!
resumeFile: String!
userIdentity: String!
}
The @model directive will automatically create our DynamoDB table and the corresponding CRUDL operations to interact with it (we'll peek at one of the generated operations in a moment).
The @auth directive says that the only operations allowed on this API are the ability to create and update. Furthermore, those operations are scoped to the currently signed-in user.
Most of the fields are straightforward: id, name, email, and resumeFile. The last one is the userIdentity field. When a user is added to Cognito, a user ID is created for them. We are adding this to our API so that our lambda function (as well as employers in the future) will be able to access resumes. Note that this ID is not associated with our user's usernames or passwords 😉
Storing the resume file itself in S3, and only its name in our database, has a few benefits:
Decreases the payload size of what we're storing in our database
We don't have to mess around with sending multi-part form data to our lambda
We have a dedicated space where emails are sent, as opposed to just an email inbox
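To make the @model directive a bit more concrete: once we push this schema, Amplify generates our CRUDL operations for us. The createCandidate mutation in src/graphql/mutations.js should look roughly like the snippet below (abbreviated here; the generated file may include extra fields and arguments depending on your Amplify CLI version).

```js
// src/graphql/mutations.js (generated by Amplify; abbreviated)
export const createCandidate = /* GraphQL */ `
  mutation CreateCandidate($input: CreateCandidateInput!) {
    createCandidate(input: $input) {
      id
      name
      email
      resumeFile
      userIdentity
    }
  }
`
```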
amplify add storage
📝 Amplify comes with two primary types of storage: a database and an S3 bucket.
Content
[enter] to accept the default
[enter] to accept the default
Auth users only
use the spacebar to select all options
No
Next, we need a lambda function that:
Is triggered by the DynamoDB table associated with our API
Has access to the S3 bucket we just created
Has permission to send email with SES
amplify add function
Lambda function
"resumeFunc"
NodeJS
Lambda Trigger
Amazon DynamoDB Stream
Use API category graphql @model table
Configure Advanced Settings? Yes
"Y" to access other resources
[use spacebar to select storage]
[use spacebar to select our S3 bucket]
select "read"
"N" to not invoking on a recurring schedule
"N" to not enable lambda layers
"Y" to configuring environment variables
SES_EMAIL
[enter an email address you have access to]
"I'm done"
"N" we don't need to configure secret values
"Y" we want to edit the local function now
📝 If you think that was a lot of steps, try doing it manually!
📝 Once done, the CLI should provide you with a few environment variables that it generated: ENV, REGION, and _YOUR_BUCKET_. Keep track of the bucket variable for now as we'll be needing that later.
The entries streamed from our DynamoDB table come in on event.Records. Update the lambda function with the following code:
const aws = require('aws-sdk')
const nodemailer = require('nodemailer')
const ses = new aws.SES()
const s3 = new aws.S3()
const transporter = nodemailer.createTransport({
SES: { ses, aws },
})
exports.handler = async (event) => {
for (const streamedItem of event.Records) {
if (streamedItem.eventName === 'INSERT') {
//pull off items from stream
const filename = streamedItem.dynamodb.NewImage.resumeFile.S
const candidateEmail = streamedItem.dynamodb.NewImage.email.S
const candidateName = streamedItem.dynamodb.NewImage.name.S
const candidateIdentity = streamedItem.dynamodb.NewImage.userIdentity.S
//change this to match your bucket name👇🏽
const RESUME_BUCKET = process.env.STORAGE_RESUMEBUCKET_BUCKETNAME
try {
//get record from s3
const resumeFile = await s3
.getObject({
Bucket: RESUME_BUCKET,
Key: `protected/${candidateIdentity}/${filename}`,
})
.promise()
//setup email with attachment
const mailOptions = {
from: process.env.SES_EMAIL,
subject: 'Candidate Resume Submission',
html: `<p>You can reach ${candidateName} at the following email: <b>${candidateEmail}</b></p>`,
to: process.env.SES_EMAIL,
attachments: [
{
filename,
content: resumeFile.Body,
},
],
}
//send email
await transporter.sendMail(mailOptions)
} catch (e) {
console.error('Error', e)
}
}
}
return { status: 'done' }
}
The above code breaks down into four main sections:
Configure our project: here we're bringing in and setting up relevant packages. The nodemailer package is a handy utility we'll install in a bit. This makes sending emails with attachments a bit simpler.
Grabbing the data we need from the event (an example record is shown after this list).
Getting the relevant resume file. Note that our files are protected.
Setting up our email and sending the email with an attachment.
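For reference, DynamoDB streams deliver each new item in DynamoDB's attribute-value format, which is why the code reads fields off the .S (string) property. A trimmed-down INSERT record looks roughly like this (the values are made up for illustration):

```js
// Illustrative shape of one entry in event.Records for an INSERT event
const exampleRecord = {
  eventName: 'INSERT',
  dynamodb: {
    NewImage: {
      name: { S: 'Jane Doe' },
      email: { S: 'jane@example.com' },
      resumeFile: { S: 'resume.pdf' },
      userIdentity: { S: 'us-east-1:1234abcd-...' }, // Cognito identity ID
    },
  },
}
```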
📝 If you have the AWS CLI installed, you can actually setup an email by typing the following command in your terminal and clicking the verification link sent to the provided email address:
aws ses verify-email-identity --email-address your-email@email.com --region us-east-1 --profile=your-aws-profile
//from the root of your project
cd amplify/backend/function/YOUR_FUNC_NAME
In this directory, open the file ending in -cloudformation-template.json and find the lambdaexecutionpolicy object. Add the following statement to its PolicyDocument.Statement array:
{
"Effect": "Allow",
"Action": "ses:SendRawEmail",
"Resource": "YOUR_SES_ARN"
}
Once added, the lambdaexecutionpolicy object should contain this new statement alongside the ones that were already there.
Next, head to the src directory of our lambda function and install the nodemailer package:
// assuming we're still in the amplify/backend/function/ourFunction directory:
cd src && npm i nodemailer
📝 We don't have to install the aws-sdk unless we're testing our function locally. AWS already installs this package in the lambda runtime.
amplify push
Confirm (Y) and accept all of the default options.
☕️ This will deploy our backend resources to the cloud, generate code for our API, and create an aws-exports file containing our backend secrets (automatically added to .gitignore).
In _app.js, add the following snippet to connect our frontend to our Amplify backend:
import Amplify from '@aws-amplify/core'
import config from '../src/aws-exports'
Amplify.configure(config)
Having users sign in before uploading gives us a couple of benefits:
We have insight into who is storing information in our S3 bucket
We can control who has access to view and upload items in S3
In index.js, modify the top portion to look like the following snippet:
import { AppHeader } from '../components/AppHeader'
import { withAuthenticator } from '@aws-amplify/ui-react'
function HomePage() {
return (
<>
<AppContainer>
<AppHeader />
<ResumeModal />
</AppContainer>
</>
)
}
export default withAuthenticator(HomePage)
//rest of code...
In ResumeForm.js, add the following imports and Storage configuration:
import { API, Storage, Auth } from 'aws-amplify'
import { createCandidate } from '../src/graphql/mutations'
Storage.configure({ level: 'protected' })
Here we bring in the createCandidate mutation that was generated automatically when we pushed up our schema, and we configure Storage so that files are uploaded at the protected level. Amplify supports three access levels for files:
public: All files are stored at the same level. Accessible to all users.
protected: Files are separated by the user's Cognito identity ID. Anyone can read, but only the user can write.
private: Only accessible to the given user.
📝 In some cases, it might make sense for files to be public. What's important to remember is that files with the same name will overwrite one another, which would be bad in our application. Example: user1 uploads resume.pdf, user2 uploads resume.pdf. For that reason we use protected.
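Since protected files are readable by any signed-in user who knows the owner's identity ID, here's a quick sketch (not part of this app's code; the candidate object is hypothetical) of how an employer-facing page could later fetch a candidate's resume:

```js
import { Storage } from 'aws-amplify'

// Sketch: get a signed URL for another user's protected file.
// `candidate.resumeFile` and `candidate.userIdentity` would come from our GraphQL API.
async function getResumeUrl(candidate) {
  return Storage.get(candidate.resumeFile, {
    level: 'protected',
    identityId: candidate.userIdentity,
  })
}
```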
Next, add the following code to handleResumeFormSubmit:
const currentCredentials = await Auth.currentCredentials()
const fileKey = await Storage.put(
formState.resumeFile.name,
formState.resumeFile
)
const response = await API.graphql({
query: createCandidate,
variables: {
input: {
name,
email,
resumeFile: fileKey.key,
userIdentity: currentCredentials.identityId,
},
},
})
📝 Because we configured S3 as protected, it will automatically prepend our files with protected/{COGNITO_IDENTITY_ID}/. However, we still call Auth.currentCredentials() to get the Cognito identity ID so we can send it along to our lambda function.
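To tie the two sides together, here's an illustration (with a made-up identity ID) of how the key written by Storage.put lines up with the key our lambda rebuilds before calling s3.getObject:

```js
// Illustration only: the identity ID below is made up.
const identityId = 'us-east-1:1234abcd-5678-90ef-example' // from Auth.currentCredentials()
const filename = 'resume.pdf'

// Storage.put with level 'protected' stores the file at:
const uploadedKey = `protected/${identityId}/${filename}`

// ...which is exactly the key the lambda reconstructs for s3.getObject:
console.log(uploadedKey) // protected/us-east-1:1234abcd-5678-90ef-example/resume.pdf
```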