As a gradient-guided search method, differentiable architecture search greatly reduces computational cost and accelerates the search compared with traditional reinforcement-learning and evolutionary methods, which search for network structures in discrete spaces. However, when the number of search epochs grows too large, the searched architecture accumulates many skip connections, causing a sharp decline in network performance. To address this phenomenon, this paper designs a staged search process: as the search progresses through its stages, different early-stopping rules are applied, so that cells at different positions in the network can take on different structures, effectively resolving the performance collapse caused by skip connections. In addition, edge normalization is introduced on the connections between nodes, and the loss function is modified to enlarge the gap between the weight parameters of different operations, which noticeably improves the stability of the architecture search. Experimental results show that the method trades a small increase in parameters and search time for higher accuracy: validation accuracy on the CIFAR-10 and CIFAR-100 datasets improves by 0.15% and 1.36%, respectively.
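The two ingredients of the abstract, an edge-normalized mixed operation and a loss term that enlarges the gap between operation weights, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the names `mixed_edge_output` and `entropy_penalty` are hypothetical, and the entropy term is one common way to sharpen a softmax, assumed here as a stand-in for the paper's modified loss.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scalars."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mixed_edge_output(op_outputs, alphas, beta):
    """DARTS-style mixed operation on one edge: the candidate operation
    outputs are combined with softmax(alpha) weights, and the whole edge
    is scaled by an edge-normalization weight beta (itself normalized
    across the edges entering a node, not shown here)."""
    w = softmax(alphas)
    return beta * sum(wi * oi for wi, oi in zip(w, op_outputs))

def entropy_penalty(alphas):
    """Entropy of the operation-weight distribution. Adding this term to
    the loss and minimizing it pushes softmax(alpha) toward one-hot,
    i.e. it enlarges the difference between operation weights."""
    w = softmax(alphas)
    return -sum(wi * math.log(wi) for wi in w)
```

A near-uniform `alphas` gives a high penalty and a peaked one a low penalty, which is the direction the paper's modified loss drives the search in.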
Multi-label classification of judicial text data is a topical problem in judicial artificial intelligence. Judicial data, however, is highly domain-specific and consists of long documents, so the BERT model alone performs poorly on multi-label classification of judicial text. To address these problems, this paper proposes a BERT-TextCNN model, which uses BERT to extract text vectors and introduces TextCNN as the multi-label classifier, so as to extract semantic features at different levels of abstraction and improve classification precision. In addition, because BERT's input-length limit affects classification results, this paper rebuilds the data set so that every sample fits within BERT's maximum input length. The method is evaluated on the multi-label classification data set of the 2021 "China Legal Research Cup". Experimental results show that, compared with the BERT baseline, the proposed method improves performance significantly and is effective for multi-label classification of Chinese judicial texts.
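The abstract's "rebuild the data set to fit BERT's input limit" step amounts to splitting long documents into windows no longer than the model's maximum sequence length. The sketch below shows one common overlapping-window scheme; `chunk_text`, `max_len`, and `stride` are illustrative names and values, since the paper does not specify its exact splitting rule. The limit of 510 tokens leaves room for BERT's `[CLS]` and `[SEP]` markers within the 512-token cap.

```python
def chunk_text(tokens, max_len=510, stride=128):
    """Split a long token sequence into overlapping windows so each
    window fits BERT's input limit. Consecutive windows overlap by
    `stride` tokens so no semantic unit is cut without context."""
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break  # the last window reaches the end of the document
        start += max_len - stride
    return chunks
```

Each window would then be encoded by BERT separately, with the per-window predictions merged (for instance by max-pooling the label scores) into one multi-label prediction per document.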
KEYWORDS: Field programmable gate arrays, Data processing, Computing systems, Data centers, Raster graphics, Control systems, Data transmission, Convolution, Telecommunications
The ever-increasing volume of data puts severe pressure on data centers. Facing large-scale data-processing demands, a data center must not only increase data bandwidth but also guarantee the timeliness of data processing, and conventional architectures are increasingly unable to meet the combined requirements of high throughput and low latency. This work adopts a stream-processing architecture of heterogeneous collaborative computing based on FPGA and CPU, designs a dual-channel separated stream-processing system built on FPGA network offloading, improves the manageability and interaction capabilities of the FPGA, reduces processing latency, and delivers a manageable, general-purpose platform for real-time data processing.
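The core idea of a "dual-channel separated" design is that small control and management messages travel on a channel of their own rather than queuing behind bulk data frames. The toy sketch below illustrates only that scheduling principle in plain Python; the class name and methods are hypothetical, and the real system implements the datapath in FPGA logic, not software queues.

```python
from collections import deque

class DualChannel:
    """Illustrative dual-channel split: control messages and bulk data
    frames travel on separate queues, and the consumer always drains
    control first, so management commands are never delayed behind
    large data transfers."""

    def __init__(self):
        self.ctrl = deque()   # small, latency-sensitive management messages
        self.data = deque()   # bulk stream payloads

    def send_ctrl(self, msg):
        self.ctrl.append(msg)

    def send_data(self, frame):
        self.data.append(frame)

    def recv(self):
        if self.ctrl:
            return ("ctrl", self.ctrl.popleft())
        if self.data:
            return ("data", self.data.popleft())
        return None
```

Even if a data frame was enqueued first, a later control message is delivered ahead of it, which is the latency property the separated design is after.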
Characteristics of Geosynchronous Synthetic Aperture Radar (GEO SAR), including ground velocity, integration time, azimuth resolution, and other factors, are investigated, and their relationship to the orbital elements is analyzed. A subaperture mode based on yaw steering, which is better suited to GEO SAR, is then proposed; applying this mode markedly reduces range migration. System parameters of an L-band GEO SAR are designed, including the antenna size, pulse width, peak transmitted power, and pulse repetition frequency. Finally, a modified Range-Doppler (RD) algorithm based on a quintic polynomial and a Chirp Scaling (CS) algorithm based on spectrum mosaicking are proposed. Point-target simulation is carried out, and the azimuth resolution is assessed by the criterion of being able to distinguish adjacent targets. The results demonstrate that the proposed image-formation algorithms can achieve a spatial resolution better than 15 m.
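The coupling the abstract describes between ground velocity, integration time, and azimuth resolution can be seen from the textbook strip-map approximation ρ_a ≈ λR / (2·v_g·T). The sketch below solves that relation for the integration time; it is a back-of-envelope illustration under this simplified model, not the paper's derivation, and all numeric values used with it are placeholders (L-band wavelength ≈ 0.24 m; GEO SAR footprint ground speeds can be only tens of m/s, which is why integration times become very long).

```python
def integration_time(wavelength_m, slant_range_m, ground_speed_mps, resolution_m):
    """Integration time T needed for azimuth resolution rho_a under the
    textbook approximation rho_a ~ lambda * R / (2 * v_g * T).
    Simplified model for illustration only."""
    return wavelength_m * slant_range_m / (2.0 * ground_speed_mps * resolution_m)
```

The formula makes the GEO SAR difficulty explicit: with a slant range near 3.8e7 m and a small ground speed, even a 15 m resolution demands an integration time orders of magnitude longer than for low-Earth-orbit SAR, which is what motivates the subaperture mode.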
This paper presents a novel algorithm based on curved-surface mosaicking to create a full-view spherical panorama from image sequences. The work concentrates on sphere projection, blank-hole elimination, global illumination alignment, and curved-patch stitching. During projection, a special longitude-latitude curved patch is proposed to describe the projected image, avoiding the information loss and unwrapping wrinkles that occur in some traditional methods. An "inverse interpolation" scheme is then applied to eliminate the projective blank holes caused by discrete computation. To achieve global illumination alignment for patches with large illumination differences, a novel method of "dispersing cumulative error" is presented, which overcomes the limitation of traditional approaches that align illumination only between neighboring patches. The final stitching of curved patches is accomplished with an image-feature matching method, yielding a smooth, seamless spherical panorama. The whole algorithm runs automatically, performs well in both illumination alignment and spherical mosaicking, and is valuable in practical applications.
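The "inverse interpolation" idea, iterating over destination pixels and sampling backward into the source rather than scattering source pixels forward, is what guarantees no blank holes, since every destination pixel is written exactly once. A minimal sketch of that backward-warping pattern follows; `backward_warp` and `inverse_map` are illustrative names, the images are plain lists of lists, and the actual longitude-latitude mapping of the paper is abstracted behind the `inverse_map` callback.

```python
def bilinear_sample(img, x, y):
    """Bilinearly interpolate a grayscale image (list of rows) at a
    real-valued coordinate (x, y), clamping to the image border."""
    h, w = len(img), len(img[0])
    x0 = min(int(x), w - 2)
    y0 = min(int(y), h - 2)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0][x0]
            + dx * (1 - dy) * img[y0][x0 + 1]
            + (1 - dx) * dy * img[y0 + 1][x0]
            + dx * dy * img[y0 + 1][x0 + 1])

def backward_warp(src, dst_h, dst_w, inverse_map):
    """Fill every destination pixel by looking up its source coordinate
    via inverse_map(x, y) and interpolating -- unlike forward projection,
    no destination pixel can be left empty."""
    return [[bilinear_sample(src, *inverse_map(x, y))
             for x in range(dst_w)]
            for y in range(dst_h)]
```

With the identity mapping the warp reproduces the source image; plugging in a sphere-to-image projection as `inverse_map` yields the hole-free longitude-latitude patch the paper describes.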