Starting today, you can attach Amazon S3 access points to Amazon FSx for OpenZFS file systems to access your file data as if it were in Amazon Simple Storage Service (Amazon S3). With this new capability, the data in your FSx for OpenZFS file systems is accessible for use with a broad range of Amazon Web Services (AWS) artificial intelligence (AI), machine learning (ML), and analytics services that work with S3. Your file data continues to reside in your FSx for OpenZFS file system.
Organizations store hundreds of exabytes of file data on premises and want to move this data to AWS for greater agility, durability, security, and scalability, and for reduced costs. Once their file data is in AWS, organizations often want to do even more with it. For example, they want to use their business data to build generative AI applications and train ML models with a wide range of AWS generative AI and ML services. They also want to use their file data with additional AWS applications. However, many AWS data analytics services and applications are built to work with data stored in Amazon S3 data lakes. Previously, using file data with these services required building data pipelines to copy the data between Amazon FSx for OpenZFS and Amazon S3 buckets.
Amazon S3 access points attached to FSx for OpenZFS file systems eliminate the need to move or copy data by providing unified access through both file protocols and Amazon S3 API operations. You can read and write file data as S3 objects using operations including GetObject, PutObject, and ListObjectsV2. You can attach hundreds of access points to a file system, with each S3 access point configured with application-specific permissions. These access points support the same granular controls as S3 access points attached to S3 buckets, including AWS Identity and Access Management (IAM) policies, block public access, and network origin controls such as restricting access to your virtual private cloud (VPC). Because your data continues to reside in your FSx for OpenZFS file system, you can keep accessing it using the Network File System (NFS) protocol and benefit from your existing data management capabilities.
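As a sketch of this dual access, the same file system can be mounted over NFS while its data is read through an attached access point alias. The file system DNS name, mount path, and alias below are placeholders, not values from this post:

```shell
# Mount the FSx for OpenZFS file system over NFS (DNS name is a placeholder)
sudo mkdir -p /mnt/fsx
sudo mount -t nfs -o nfsvers=4.2 \
  fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com:/fsx/ /mnt/fsx

# The same files are visible as S3 objects through the access point alias
aws s3api list-objects-v2 --bucket my-fsx-ap-1a2b3c4d5e6f7g8h9-s3alias
```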
You can use file data in Amazon FSx for OpenZFS file systems to power generative AI applications with Amazon Bedrock for Retrieval-Augmented Generation (RAG), train ML models with Amazon SageMaker, and run analytics or business intelligence (BI) with Amazon Athena and AWS Glue through the S3 API. You can also use open source tools such as Apache Spark and Apache Hive without moving or reformatting data.
Getting started
You can create and attach an S3 access point to your Amazon FSx for OpenZFS file system using the Amazon FSx console, the AWS Command Line Interface (AWS CLI), or the AWS SDKs.
To get started, you can follow the steps in the Amazon FSx for OpenZFS documentation to create a file system. Then, in the Amazon FSx console, choose Actions and select Create S3 access point. Keep the default configuration and choose Create.
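The same step can be scripted. The following AWS CLI sketch assumes the `create-and-attach-s3-access-point` command and parameter shapes from the Amazon FSx API; the names and IDs are placeholders:

```shell
# Create an S3 access point and attach it to an FSx for OpenZFS volume
# (command and parameter names assumed from the FSx API; IDs are placeholders)
aws fsx create-and-attach-s3-access-point \
  --name my-fsx-access-point \
  --type OPENZFS \
  --open-zfs-configuration VolumeId=fsvol-0123456789abcdef0
```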
If you want to follow the creation progress, you can switch to the Amazon FSx console.
Once it's available, select the name of the new S3 access point and review the access point summary. The summary includes an automatically generated alias that works anywhere you would normally use an S3 bucket name.
With the bucket alias, you can access your FSx data directly through S3 API operations:
- List objects using the ListObjectsV2 API
- Retrieve files using the GetObject API
- Write data using the PutObject API
The data is still accessible via NFS.
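For illustration, the three operations above map to the following AWS CLI `s3api` commands; the access point alias and object keys are placeholders:

```shell
ALIAS=my-fsx-access-point-1a2b3c4d5e6f7g8h-s3alias  # placeholder alias

# List objects with ListObjectsV2
aws s3api list-objects-v2 --bucket "$ALIAS" --prefix docs/

# Retrieve a file with GetObject
aws s3api get-object --bucket "$ALIAS" --key docs/report.pdf report.pdf

# Write data with PutObject
aws s3api put-object --bucket "$ALIAS" --key docs/notes.txt --body notes.txt
```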
In addition to accessing your FSx data through the S3 API, you can work with the data using a wide range of AI, ML, and analytics services that work with data in S3. For example, I built an Amazon Bedrock knowledge base using PDFs containing customer service policies from my travel support tutorial, WhatsApp-Based RAG Travel Support Agent: Enhancing customer experience with PostgreSQL knowledge bases, as the data source.
To create the Amazon Bedrock knowledge base, I followed the steps in Connect to Amazon S3 for your knowledge base in the user guide. For the data source, I entered my S3 access point alias as the S3 source, then configured and created the knowledge base.
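As a hedged sketch, a data source pointing at the access point could also be added with the Amazon Bedrock `create-data-source` API; the knowledge base ID, data source name, and access point ARN below are placeholders:

```shell
# Add the S3 access point as a knowledge base data source
# (knowledge base ID and access point ARN are placeholders)
aws bedrock-agent create-data-source \
  --knowledge-base-id ABCDEFGHIJ \
  --name fsx-openzfs-docs \
  --data-source-configuration '{
    "type": "S3",
    "s3Configuration": {
      "bucketArn": "arn:aws:s3:us-east-1:111122223333:accesspoint/my-fsx-access-point"
    }
  }'
```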
Once the knowledge base is synchronized, I can see all the documents, with the source listed as S3.
Finally, I ran queries against the knowledge base and verified that it successfully used file data from my Amazon FSx for OpenZFS file system to provide contextual responses, demonstrating seamless integration without data movement.
Things to know
Integration and access control – Amazon S3 access points for Amazon FSx for OpenZFS file systems support standard S3 API operations (such as GetObject, ListObjectsV2, and PutObject) through the S3 endpoint, with granular AWS Identity and Access Management (IAM) access control and file system user verification. Your S3 access point includes an automatically generated access point alias for accessing data as you would with S3 buckets, and public access is blocked for Amazon FSx resources by default.
Data management – Your data remains in your Amazon FSx for OpenZFS file system while becoming accessible as if it were in Amazon S3, eliminating the need to move or copy data, and the file data remains accessible through NFS file protocols.
Performance – Amazon S3 access points for Amazon FSx for OpenZFS file systems deliver first-byte latency in the range of tens of milliseconds, consistent with access to an S3 bucket. Performance scales with the provisioned throughput of the Amazon FSx file system, with maximum throughput determined by the file system configuration.
Pricing – Amazon S3 charges for requests and data transfer through the S3 access point, in addition to standard Amazon FSx charges. For more information, visit the Amazon FSx for OpenZFS pricing page.
You can start using the Amazon FSx console, AWS CLI, or AWS SDKs today to attach Amazon S3 access points to your Amazon FSx for OpenZFS file systems. This feature is available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Europe (Frankfurt, Ireland, Stockholm), and Asia Pacific (Hong Kong, Singapore, Sydney, Tokyo).
– Eli