Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published on arXiv, 2020
In this paper, we propose a deep learning-based method, the deep Euler method (DEM), to solve ordinary differential equations. DEM significantly improves the accuracy of the Euler method by approximating the local truncation error with a deep neural network, which allows a high-precision solution to be obtained with a large step size. The deep neural network in DEM is mesh-free during training and generalizes well to unmeasured regions. DEM can easily be combined with other numerical schemes, such as the Runge-Kutta method, to obtain better solutions. Furthermore, the error bound and stability of DEM are discussed.
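The structure of a DEM step can be sketched in a few lines. In this minimal sketch the trained network is replaced by a hand-written stand-in (the exact leading truncation-error term for the toy equation y' = -y), purely to show the shape of the update; the names `dem_step` and `correction` are illustrative, not from the paper.

```python
import numpy as np

def dem_step(f, t, y, h, correction):
    # DEM update: an Euler step plus a learned estimate of the
    # local truncation error; `correction` stands in for the
    # trained neural network N(t, y, h).
    return y + h * f(t, y) + h**2 * correction(t, y, h)

# Toy ODE y' = -y with exact solution exp(-t). As a stand-in for a
# trained network we use the exact leading truncation-error term
# 0.5 * y, which makes the step second-order accurate.
f = lambda t, y: -y
correction = lambda t, y, h: 0.5 * y   # hypothetical trained model

t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = dem_step(f, t, y, h, correction)
    t += h

# Error at t = 1 is much smaller than plain Euler's (~2e-2).
print(abs(y - np.exp(-1.0)))
```

With the correction term switched off the loop reduces to the plain Euler method, which makes the accuracy gain easy to compare directly.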
Download here
Published in CVPR 2021 (oral), 2021
The binary grid mask representation is broadly used in instance segmentation. A representative instantiation is Mask R-CNN, which predicts masks on a 28×28 binary grid. Generally, a low-resolution grid is not sufficient to capture details, while a high-resolution grid dramatically increases training complexity. In this paper, we propose a new mask representation that applies the discrete cosine transform (DCT) to encode a high-resolution binary grid mask into a compact vector. Our method, termed DCT-Mask, can easily be integrated into most pixel-based instance segmentation methods. Without any bells and whistles, DCT-Mask yields significant gains across different frameworks, backbones, datasets, and training schedules. It requires no pre-processing or pre-training and causes almost no harm to running speed. In particular, the improvement is greater for higher-quality annotations and more complex backbones. Moreover, we analyze the performance of our method from the perspective of mask representation quality. The main reason DCT-Mask works well is that it obtains a high-quality mask representation with low complexity.
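The encode/decode idea can be sketched with SciPy's DCT routines. This is a minimal sketch, not the paper's exact configuration: the grid size (128×128), the 16×16 coefficient block (a 256-dim vector), and the 0.5 decoding threshold are illustrative choices.

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_mask(mask, k):
    # 2-D type-II DCT of the binary grid mask; keep the top-left
    # k x k block of low-frequency coefficients as the compact vector.
    coeffs = dctn(mask.astype(float), norm="ortho")
    return coeffs[:k, :k].flatten()

def decode_mask(vector, k, size):
    # Zero-pad the coefficient block back to full size, invert the
    # DCT, then threshold to recover a binary grid.
    coeffs = np.zeros((size, size))
    coeffs[:k, :k] = vector.reshape(k, k)
    return idctn(coeffs, norm="ortho") > 0.5

# Toy 128x128 mask: a filled square.
mask = np.zeros((128, 128), dtype=bool)
mask[32:96, 32:96] = True

vec = encode_mask(mask, 16)      # 256 numbers instead of 16384 pixels
recon = decode_mask(vec, 16, 128)
print((recon == mask).mean())    # per-pixel agreement, close to 1
```

Because most of a mask's energy lives in the low-frequency DCT coefficients, truncation loses little detail while shrinking the regression target by roughly two orders of magnitude.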
Download here
Published in Journal of Nonlinear & Variational Analysis, 2022
In this paper, we propose a deep neural network-based numerical method for solving contact problems. Focusing on a static frictionless unilateral contact problem, we derive its weak formulation and prove that the solution of the weak formulation is also the minimizer of the corresponding energy functional. By converting the original contact problem into a minimization problem, a deep neural network is adopted to approximate the solution and solve the minimization problem. Numerical results demonstrate the effectiveness and accuracy of our method.
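The equivalence the abstract mentions, between the weak formulation (a variational inequality) and energy minimization over an admissible convex set, can be checked numerically in a finite-dimensional analogue. This toy sketch uses a random SPD matrix for the bilinear form and the constraint u ≥ 0 as a stand-in for the non-penetration condition; it is not the paper's method or problem.

```python
import numpy as np

# Finite-dimensional analogue: the minimizer of E(u) = 0.5*u.T@A@u - b@u
# over the convex set {u >= 0} satisfies the variational inequality
#   (A u - b) . (v - u) >= 0   for every feasible v >= 0.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)      # SPD matrix playing the role of a(., .)
b = rng.standard_normal(5)       # linear functional L

# Minimize E over {u >= 0} by projected gradient descent.
u = np.zeros(5)
for _ in range(5000):
    u = np.maximum(u - 0.05 * (A @ u - b), 0.0)

# Check the variational inequality at random feasible points.
for _ in range(100):
    v = np.abs(rng.standard_normal(5))
    assert (A @ u - b) @ (v - u) >= -1e-8
```

The same logic underlies the paper's approach: once the contact problem is cast as minimizing an energy over an admissible set, any minimizer solves the weak formulation, so a network trained to minimize the energy approximates the contact solution.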
Download here
Published in Nonlinear Analysis: Real World Applications, 2023
In this paper, we propose a method based on deep neural networks to solve obstacle problems. By introducing penalty terms, we reformulate the obstacle problem as a minimization problem and utilize a deep neural network to approximate its solution. The convergence analysis is established by decomposing the error into three parts: approximation error, statistical error, and optimization error. The approximation error is bounded in terms of the depth and width of the network, the statistical error is estimated by the number of samples, and the optimization error is reflected in the empirical loss term. Owing to its unsupervised and mesh-free nature, the proposed method has wide applicability. Numerical experiments illustrate the effectiveness and robustness of the proposed method and verify the theoretical results.
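The penalty reformulation can be sketched in one dimension. For brevity this dependency-free sketch replaces the neural-network ansatz with nodal values on a grid minimized by plain gradient descent, so it only illustrates the penalized objective, not the paper's network-based solver; the load, obstacle height, and penalty weight are all illustrative.

```python
import numpy as np

# 1-D obstacle problem on (0, 1): minimize the penalized energy
#   E(u) = int( 0.5*|u'|^2 - f*u ) dx + (beta/2) * int( max(psi - u, 0)^2 ) dx
# with u(0) = u(1) = 0. The last term penalizes violations of the
# obstacle constraint u >= psi.
n, beta, lr = 51, 1e4, 2e-3
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.full(n, -10.0)            # constant downward load
psi = np.full(n, -0.1)           # flat obstacle at height -0.1

# Stiffness matrix of the discrete Dirichlet energy.
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

u = np.zeros(n)
for _ in range(20000):
    grad = A @ u - h * f - beta * h * np.maximum(psi - u, 0.0)
    u -= lr * grad
    u[0] = u[-1] = 0.0           # Dirichlet boundary conditions

# Without the obstacle the minimizer is -5x(1-x), dipping to -1.25;
# the penalty keeps the solution resting on the obstacle near -0.1.
print(u.min())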
Download here
Published in SIGGRAPH 2024 (conference paper), 2024
The linear conjugate gradient method is widely used in physical simulation, particularly for solving large-scale linear systems derived from Newton's method. The nonlinear conjugate gradient method generalizes it to nonlinear optimization and is extensively used for practical large-scale unconstrained optimization problems. However, it is rarely discussed in physical simulation because it requires multiple vector-vector dot products per iteration. Fortunately, with the advancement of GPU-parallel acceleration techniques, this is no longer a bottleneck. In this paper, we propose a Jacobi-preconditioned nonlinear conjugate gradient method for elastic deformation using interior-point methods. Our method is straightforward, GPU-parallelizable, and exhibits fast convergence and robustness against large time steps. The barrier function employed in interior-point methods necessitates continuous collision detection in every iteration to obtain a penetration-free step size, which is computationally expensive and challenging to parallelize on GPUs. To address this issue, we introduce a line search strategy that deduces an appropriate step size in a single pass, eliminating the need for additional collision detection. Furthermore, we simplify and accelerate the computation of the Jacobi preconditioner and the Hessian-vector product for hyperelasticity and the barrier function. Our method can accurately simulate objects comprising over 100,000 tetrahedra in complex self-collision scenarios at real-time speeds.
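A generic sketch of nonlinear conjugate gradient with a Jacobi (diagonal) preconditioner, here using a Polak-Ribière update and an exact line search on a toy ill-conditioned quadratic; the paper's single-pass line search, barrier terms, and GPU kernels are not reproduced, so this only shows the skeleton that the method builds on.

```python
import numpy as np

# Toy ill-conditioned SPD system: minimize E(x) = 0.5*x.T@A@x - b@x.
# The diagonal of A serves as the Jacobi preconditioner.
n = 100
diag = np.linspace(1.0, 1000.0, n)               # condition number ~1000
A = np.diag(diag) - 0.4 * np.eye(n, k=1) - 0.4 * np.eye(n, k=-1)
b = np.ones(n)

grad = lambda x: A @ x - b
P_inv = 1.0 / diag                               # Jacobi preconditioner

x = np.zeros(n)
g = grad(x)
z = P_inv * g                                    # preconditioned gradient
d = -z
for _ in range(50):
    # Exact line search for a quadratic objective.
    alpha = -(g @ d) / (d @ (A @ d))
    x = x + alpha * d
    g_new = grad(x)
    z_new = P_inv * g_new
    # Preconditioned Polak-Ribiere update (clipped at zero).
    beta = max(0.0, (z_new @ (g_new - g)) / (z @ g))
    d = -z_new + beta * d
    g, z = g_new, z_new

print(np.linalg.norm(grad(x)))                   # near zero: converged
```

Note that each iteration needs several dot products (for alpha and beta), which is exactly the cost the abstract says GPU parallelism makes affordable; the preconditioner itself is a single element-wise multiply and parallelizes trivially.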
Download here
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different field in type. You can put anything in this field.
Undergraduate course, ZJU, 2019