With the development of edge computing, optically interconnected edge data centers have attracted extensive attention for their storage, computing, and large-bandwidth capabilities. Caching popular content in edge data centers has been proposed to reduce network load and latency. Because a single edge data center has limited capability, multiple edge data centers must cooperate to meet specific business requirements. Data in the edge-computing optical network are distributed according to area deployment, and coordination between data centers can waste resources and add delay because of asynchronous transmission times. We designed and implemented a latency-controlled distributed content caching and data aggregation experiment in MEC-empowered metro optical networks. The system not only realizes dynamic network configuration and service deployment but also reduces the average delay and improves resource utilization.
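To illustrate the idea of latency-controlled cooperation among cached edge sites, the following is a minimal Python sketch, not the authors' implementation: it assumes each edge data center exposes a content cache and a fixed access latency, and a request is served by the lowest-latency cooperating site that holds the content within a latency budget, falling back to a remote data center otherwise. All names, latencies, and the budget are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeDC:
    """One edge data center with a simple content cache and an assumed access latency (ms)."""
    name: str
    latency_ms: float                      # assumed access latency from the user to this DC
    cache: set = field(default_factory=set)

def place_request(content_id: str, edge_dcs: list, latency_budget_ms: float,
                  remote_latency_ms: float = 40.0):
    """Pick the lowest-latency cooperating edge DC that caches the content and
    meets the latency budget; otherwise fall back to the remote data center.
    Returns (serving_site, latency_ms). Parameters are illustrative assumptions."""
    candidates = [dc for dc in edge_dcs
                  if content_id in dc.cache and dc.latency_ms <= latency_budget_ms]
    if candidates:
        best = min(candidates, key=lambda dc: dc.latency_ms)
        return best.name, best.latency_ms
    return "remote-dc", remote_latency_ms

if __name__ == "__main__":
    dcs = [EdgeDC("edge-A", 2.0, {"video-42"}),
           EdgeDC("edge-B", 5.0, {"video-42", "video-7"}),
           EdgeDC("edge-C", 3.0, set())]
    print(place_request("video-42", dcs, latency_budget_ms=10.0))  # served by edge-A
    print(place_request("video-99", dcs, latency_budget_ms=10.0))  # falls back to remote-dc
```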