DeepSpline: Data-Driven Reconstruction of Parametric Curves and Surfaces

Abstract
Reconstruction of geometry from different input modes, such as images or point clouds, has been instrumental in the development of computer-aided design and computer graphics. Implementations of these applications have traditionally used spline-based representations at their core. Most such methods solve optimization problems that minimize an output-target mismatch. However, these optimization techniques are local methods by nature and require an initialization that is sufficiently close to the solution. We propose a deep learning architecture that adapts to perform spline fitting tasks, providing results complementary to the aforementioned traditional methods. We showcase the performance of our approach by reconstructing spline curves and surfaces from input images or point clouds.
CCS Concepts: Computing methodologies → Parametric curve and surface models
Jun Gao, Chengcheng Tang, Vignesh Ganapathi-Subramanian, Jiahui Huang, Hao Su, Leonidas J. Guibas
University of Toronto; Vector Institute;
Tsinghua University; Stanford University; UC San Diego
1 Introduction
Three-dimensional data, which is useful for everyday geometric representation and design, is essential in modern industry and research. Generating, approximating, processing and storing such data succinctly, yet faithfully, are vital research problems with a long history of study. While it would be ideal to use three-dimensional data in its most lossless form, the theoretical limits on representability as well as the source of data collection make the mode of representation crucial. Different applications use varied modes of 3D data representation, such as grids, point clouds, meshes and splines.
In the broad spectrum of 3D data representations, the most abstract forms are those based on splines and parametric primitives. Their natural ability to represent arbitrarily smooth surfaces makes them the industry standard for computer-aided design. Therefore, while representations such as point clouds are more accessible, conversion to parametric representations is often necessary. Generating clean geometric data with parametric surfaces based on observations often requires approximation and fitting based on a distance measure. However, the requirements of a measurable distance between target and output and of an initialization are common limitations for this family of fitting algorithms.
On another front, creating geometry induced and inferred from indirect inputs has historically been a very attractive research problem; a strikingly exciting direction is geometric inference from single images. While such tasks, commonly known as Shape from X, have been studied for over four decades, recent advances in deep learning have shed new light on recovering these geometries. By synthetically generating observations from ground-truth geometry, recovering geometry from single images has been demonstrated with voxel-grid or point-cloud representations over a wide variety of shapes.
A natural question that then arises is whether there is a way to infer parametrizable surfaces directly from the input information. A deep learning framework is well suited to perform such inference for multiple reasons. Fundamentally, deep learning-based frameworks are adaptable to varied input formats, especially when the relationship between the data and the geometry is highly nonlinear and nonconvex, e.g., images. Besides, by learning over a myriad of examples, deep networks also do away with the need for manual initialization of the control points, thereby making the inference process less laborious. Finally, deep learning frameworks also aid inference when the numbers of control points and curves are variable, quantities that often need to be determined heuristically in the traditional setting.
In this paper, we attempt to reconstruct spline curves and surfaces using data-driven approaches. To tackle challenges in the 2D case, such as multiple splines with intersections, we use a hierarchical Recurrent Neural Network (RNN) trained with ground-truth labels to predict a variable number of spline curves, each with an undetermined number of control points. In the 3D case, we reconstruct surfaces of revolution and extrusion without self-intersection through an unsupervised learning approach that circumvents the requirement for ground-truth labels.
To summarize, our contributions are as follows:

We define two single-layer RNNs, the Curve RNN and the Point RNN, that perform curve predictions and control-point predictions respectively.

We provide a Hierarchical RNN architecture that reconstructs a variable number of 2D spline curves, each containing a variable number of control points, using nested Curve RNN and Point RNN units, together with an algorithm to train it effectively.

We provide an unsupervised parametric reconstruction model that performs reconstruction of 3D surfaces of extrusion or revolution.
2 Related Work
The problem of recovering faithful yet succinct representations of geometry has been studied extensively over the past decades with varied forms of outputs and inputs. Three categories of work are relevant to this study and fundamental to multiple applications. We first discuss previous work on spline fitting, in which a distance between the target and the result is minimized directly. Next, we discuss the creation of shapes from indirect information, i.e., Shape from X, in light of current advances in deep learning. Finally, we review work on the vectorization of rasterized images and discuss the main differences.
Spline Fitting In computer-aided design and computer graphics, registering or fitting curves or surfaces to targets, e.g., point clouds, is essential for a wide range of tasks, such as industrial design following a physical sculpture. One of the most widely used methods is Iterative Closest Point (ICP) [BM92, CM91], which minimizes the distances between two clouds of points based on an initial configuration that is iteratively re-evaluated. Following the same goal of minimizing a directly measurable and differentiable distance, multiple variants of registration and fitting algorithms have been proposed. By devising a metric that is adaptive to curvature, Wang et al. proposed a faster and more robust algorithm [WPL06], which was further accelerated with quasi-Newton methods [ZBLW12]. This has been extended to spline surfaces with constraints such as developability [TBWP16]. Despite being able to successfully and efficiently minimize the energies encoding distances, a proper initialization (a given number of points at selected positions) is always necessary for such approaches. In contrast, our method does not require such an initialization and could even complement those previous approaches to improve their fits.
Shape from X There has been a long history of interest in discovering shapes from indirect inputs such as images, e.g., from shading [Hor70, HB89] or texture [Ken79]. Most traditional methods follow a sequential procedure including, e.g., light-source estimation, normal estimation, and depth estimation. More recently, multiple works generate shapes from images in an end-to-end manner with the help of deep neural networks and synthetically generated data, for shapes represented as volumetric grids [CXG16] or point clouds [FSG16]. Despite the variety and complexity of recoverable shapes from single images, a large gap remains between the reconstruction and a clean geometry. While multiscale approaches, e.g., those based on octrees [TDB17], attempt to enhance geometric details with higher resolution, we attempt to recover the parameters of geometric primitives, especially spline curves or surfaces, directly, which provides a higher-level geometric abstraction with arbitrarily high resolution.
Image Tracing Another related field is image tracing, or vectorization, in which a rasterized image is converted to a vectorized one. Commercially available tools, e.g., Illustrator, often produce a fine tessellation with an excessive number of control points to ensure fidelity. While most vectorization techniques work on the boundaries of the shapes, recent works such as [FLB16] and [SSISI16] strive to simplify the output curves in a globally consistent manner. Besides tracing based on direct color differences, PolygonRNN [CKUF17] used an RNN to predict the polygon contour of an object at a semantic level, which could be used for instance segmentation or for reducing labelling labor. In contrast to these methods that perform vectorization, we aim to abstract the simplest types of representation, based on general splines instead of polylines or interpolating cubic Bezier curves, and to create 3D surfaces based on images.
3 Overview
3.1 Motivation
Traditional methods attempt to extract curved lines or surfaces from images by leveraging low-level local image features such as gradients. For example, the Canny edge detector [Can86] has been popular in computer vision research for decades. Although low-level image features can work well in simple cases where the background is simple and clean, like handwriting on clean white paper, in images with clutter, as in Figure 1, it is much more challenging to extract the profile of objects. It then becomes vital to develop a more robust method that can better exploit image content. As shown in recent computer vision papers on object recognition [KSH12], deep learning methods are able to learn object categories agnostic to nuisance factors such as lighting, background clutter, and pose variation. We therefore resort to deep learning methods to detect and generate parametric curves for 2D/3D reconstruction. This serves as crucial motivation for solving this problem with learning-based techniques.
3.2 Method
The spline-based reconstruction techniques presented in this paper operate on multiple input modalities, specifically images and point clouds, and both 2D and 3D reconstruction are discussed (Figure 2). For 2D reconstruction from images, there are two stages: predicting the spline curves in the image to reconstruct, and predicting the actual control points that reconstruct each spline curve. In the paper, we first address these reconstructions as individual problems. We solve the problem of identifying the spline curves when each curve has a fixed number of control points, and the problem of identifying the control points of a single spline curve when the number of control points used to generate it is unknown. The techniques developed for these individual problems are then combined in a hierarchical deep learning module, the Hierarchical RNN, to solve the more general problem in which both quantities are unknown.
For 3D reconstruction, we perform surface reconstruction in the case of an extruded cross-section or rotational symmetry, with two input modes: images or point clouds. The image or point cloud data are processed to learn features, from which the spline curves that generate the shape are learned. From the learned spline curve, a surface of extrusion or revolution is generated by extruding the curve along the path of extrusion or revolving it about the axis of symmetry.
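Both generation procedures reduce to simple coordinate transforms of a sampled profile curve. A minimal NumPy sketch (function names and sampling densities are our own illustrative choices, not the paper's code):

```python
import numpy as np

def revolve_profile(profile_xy, n_angles=64):
    """Revolve a 2D profile curve (x = radius, y = height) about the
    y-axis to obtain a point cloud on the surface of revolution."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    r, y = profile_xy[:, 0], profile_xy[:, 1]
    # Outer product: every profile point swept through every angle.
    x = r[:, None] * np.cos(angles)[None, :]
    z = r[:, None] * np.sin(angles)[None, :]
    yy = np.broadcast_to(y[:, None], x.shape)
    return np.stack([x, yy, z], axis=-1).reshape(-1, 3)

def extrude_profile(profile_xy, height, n_steps=32):
    """Extrude a 2D profile curve along the z-axis by `height`."""
    zs = np.linspace(0.0, height, n_steps)
    pts = np.concatenate(
        [np.broadcast_to(profile_xy, (n_steps, *profile_xy.shape)),
         np.broadcast_to(zs[:, None, None], (n_steps, len(profile_xy), 1))],
        axis=-1)
    return pts.reshape(-1, 3)
```

Sampling the resulting surfaces this way also yields the point clouds used later as reconstruction targets.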
4 Supervised 2D Reconstruction
Reconstructing spline curves in images involves two modes of variability: the number of spline curves and the number of control points. While solving for both factors simultaneously can prove challenging, solving the subproblems in which one of these modes is fixed provides ample intuition for the harder problem with more variables. In this section, we provide basic models of spline-curve fitting addressing both variability issues. First, we propose a model that infers a variable number of control points to fit a single spline curve. Then, we extend this to fitting multiple spline curves with a fixed number of control points. Finally, we propose the Hierarchical RNN to solve for multiple spline curves with a variable number of control points.
4.1 Single Spline Curve, Variable Number of Control Points
A vanilla model is used to tackle the problem of fitting a variable number of control points to a single spline curve. For the spline curve in consideration, the corresponding control points form an ordered sequence, with each element being the position of one control point. Predicting this variable number of control points can therefore be viewed as inferring a variable-length sequence. Very similar learning techniques have previously been used in machine translation and image captioning [SVL14, BCB14, CVMG14, VTBE15]. The use of RNNs for this generative process is a natural choice.
The input to the pipeline is an image that contains the spline curve. A deep convolutional network is used to extract a feature vector from this image. This feature vector is then forwarded to an RNN module, which predicts the control-point sequence. The RNN module performs a dual prediction task. At each iteration, the RNN predicts the position of a new control point and the probability with which this control point is the endpoint. The probability is predicted as a distribution over two states {CONTINUE = 0, STOP = 1}. Specifically, at time step $t$, the model predicts the control point $c_t$ and the probability $s_t$ for the prediction to stop at this time step. In the ideal scenario, the value of $s_t$ would be binary, with value $0$ for all intermediate time steps and $1$ for the final time step, forcing the prediction to end.
The network architecture is shown in Figure 3. Here, we use VGGNet [SZ14] to extract image features, and then perform mean pooling to obtain a vector representation of the whole image. This feature vector is fed into a linear layer to get a more abstract 512-dimensional feature vector, which is then supplied to the RNN. We use a one-layer Gated Recurrent Unit (GRU) [CVMBB14] as the basic block of the RNN, with the dimensions of the input and hidden layers set to 512. At each time step, the hidden vector of the RNN is fed into a two-layer fully-connected network to produce the control point's position $c_t$ and the stop probability $s_t$. Specifically, the output of this two-layer fully-connected network has four units: the first two represent $c_t$, and we append a softmax layer to the last two units, which provides a probability distribution over {CONTINUE = 0, STOP = 1}.
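As a rough sketch of this decoder (layer names, sizes, and the fixed unrolling are illustrative assumptions, not the authors' released code), the GRU-plus-head structure can be written as:

```python
import torch
import torch.nn as nn

class ControlPointDecoder(nn.Module):
    """Sketch of the Sec. 4.1 decoder: a one-layer GRU that, at each
    step, emits a control-point position and a CONTINUE/STOP
    distribution from a pooled image feature vector."""

    def __init__(self, feat_dim=512, hidden=512, max_steps=8):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden)  # image feature -> RNN input
        self.cell = nn.GRUCell(hidden, hidden)
        self.head = nn.Linear(hidden, 4)         # (x, y, logit_cont, logit_stop)
        self.max_steps = max_steps

    def forward(self, img_feat):
        x = torch.relu(self.proj(img_feat))
        h = torch.zeros_like(x)
        points, stop_probs = [], []
        for _ in range(self.max_steps):
            h = self.cell(x, h)
            out = self.head(h)
            points.append(out[:, :2])
            # Softmax over the last two units: P(CONTINUE), P(STOP).
            stop_probs.append(torch.softmax(out[:, 2:], dim=-1)[:, 1])
        return torch.stack(points, 1), torch.stack(stop_probs, 1)
```

At inference, one would read off points until the stop probability fires; here the loop is unrolled to a fixed `max_steps` for simplicity.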
We use a Mean-Squared Error (MSE) term to penalize the positions of the control points, and a Cross-Entropy term for the predicted stop probability. The loss function used to train the RNN is
(1)  $\mathcal{L} = \mathcal{L}_{\mathrm{position}} + \lambda\,\mathcal{L}_{\mathrm{stop}}$
(2)  $\mathcal{L}_{\mathrm{position}} = \frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{n_i}\big\lVert \hat{c}^{\,i}_t - c^{\,i}_t \big\rVert_2^2, \qquad \mathcal{L}_{\mathrm{stop}} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{n_i}\big[ s^i_t \log \hat{s}^i_t + (1-s^i_t)\log(1-\hat{s}^i_t) \big]$
where $N$ is the size of the training dataset, $n_i$ is the number of control points of spline curve $i$ in the training dataset, $\hat{c}^i_t$ is the predicted position of a control point, $c^i_t$ is the corresponding ground truth, $\hat{s}^i_t$ is the predicted stop probability, $s^i_t$ is the ground-truth stop probability ($s^i_t = 1$ when $t = n_i$, $0$ otherwise), and $\lambda$ is the optimization hyperparameter.
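The per-curve contribution to this loss can be sketched in NumPy as follows (naming is our own; the target stop label is 1 only at the last control point):

```python
import numpy as np

def spline_rnn_loss(pred_pts, gt_pts, pred_stop, lam=0.1):
    """MSE on control-point positions plus cross-entropy on the stop
    probability, in the spirit of Eqs. (1)-(2). pred_stop[t] is the
    predicted probability of stopping at step t."""
    n = len(gt_pts)
    mse = np.mean(np.sum((pred_pts[:n] - gt_pts) ** 2, axis=-1))
    target = np.zeros(n)
    target[-1] = 1.0                       # s_t = 1 only when t = n_i
    p = np.clip(pred_stop[:n], 1e-7, 1 - 1e-7)
    ce = -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))
    return mse + lam * ce
```

A perfect prediction with a confident stop at the final step drives both terms toward zero.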
4.2 Multiple Spline Curves, Fixed Number of Control Points
Fitting multiple spline curves with a fixed number of control points is solved with a minor modification of the model in Section 4.1. Here, instead of predicting whether a certain point is the final control point, an entire spline curve, with all its control points, is computed at each iteration of the RNN. The RNN also predicts a probability that determines whether the most recent curve is the final one.
However, two new challenges arise in this setting. The RNN predicts an ordered sequence of curves, while the target is an unordered bag of spline curves. Therefore, a correspondence needs to be established between the target curves and the curves predicted by the network. This is achieved by modeling the problem as a matching problem in a bipartite graph, whose two sets are the set of target curves and the set of predicted curves. The weight of each edge between two curves is the distance between them. The Hungarian algorithm is used to obtain a matching of minimal cost. This is similar in spirit to the matching problem solved in [RT15].
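This bipartite matching can be sketched with SciPy's Hungarian-algorithm solver (using the mean point-wise distance as edge weight here is one plausible choice; the curve distance actually used is defined in Section 4.2.1):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_curves(pred_curves, gt_curves):
    """Match predicted curves to ground-truth curves via minimal-cost
    bipartite assignment. Each curve is an (n, 2) array of control
    points; the edge weight is the mean point-wise distance."""
    cost = np.array([[np.mean(np.linalg.norm(p - g, axis=-1))
                      for g in gt_curves] for p in pred_curves])
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    return list(zip(rows, cols)), cost[rows, cols].sum()
```

The returned pairing is then used to align prediction order with the unordered ground-truth set before computing the reconstruction loss.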
The second challenge is to ensure that while a certain spline curve is being processed, influences from the regions of other spline curves are minimized. Since multiple curves occupy different regions of the image, it becomes necessary to suppress these influences. This is handled by adopting an attention mechanism [SSWF15, KIO16]. At each time step $t$, the image features are scaled by an attention map, which weights different regions of the image, before being passed into the RNN. This ensures that the attention of the network is localized to the region in which the current spline dominates. The idea of using a localization network to draw attention to certain regions over others has been used with considerable success in [XBK15]. Our methods are described in more detail in Sections 4.2.1 and 4.2.2.
4.2.1 Loss Function
The training dataset is composed of $N$ labeled images. Denoting image $i$ as $I^i$, its annotation $\{S^i_1, \dots, S^i_{m_i}\}$ is a set of spline curves, where $m_i$ is the number of spline curves in this image and $S^i_j$ is the sequence of control-point positions that constructs spline curve $j$ inside image $I^i$. In all our experiments, we use $n = 5$ control points per curve.
The network predicts both a sequence of spline curves $\hat{S}^i_1, \dots, \hat{S}^i_{m_i}$ and stop probabilities $\hat{s}^i_j$. We train the RNN by running $m_i$ iterations for each training instance $I^i$. At inference, the number of iterations is determined by the predicted stop probability: the recurrence stops when $\hat{s}_j$ indicates STOP.
The loss on the probability sequence is defined as before:
(3)  $\mathcal{L} = \mathcal{L}_{\mathrm{match}} + \lambda\,\mathcal{L}_{\mathrm{stop}}$
(4)  $\mathcal{L}_{\mathrm{stop}} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{m_i}\big[ s^i_j \log \hat{s}^i_j + (1-s^i_j)\log(1-\hat{s}^i_j) \big]$
As described earlier, the loss term that measures the distance between the target and predicted spline curves also requires alignment. Since the prediction is an ordered sequence and the ground truth is an unordered set, the order in which the spline curves in an image are processed is not easily determined. Randomly allocating a processing rank to the target curves would introduce ambiguity, and network convergence would not be guaranteed. The bipartite-graph model described earlier is used to establish the correspondence while computing the reconstruction loss. This reconstruction loss is computed as follows:
(5)  $\mathcal{L}_{\mathrm{match}} = \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{m_i} d\big( S^i_j,\ \hat{S}^i_{\sigma_i(j)} \big)$
where
(6)  $\sigma_i = \arg\min_{\sigma \in \Pi_{m_i}} \sum_{j=1}^{m_i} d\big( S^i_j,\ \hat{S}^i_{\sigma(j)} \big)$
(7)  $d\big(S, \hat{S}\big) = \frac{1}{n}\sum_{k=1}^{n} \big\lVert p_k - \hat{p}_k \big\rVert_2$
(8)  $\big\lVert p_k - \hat{p}_k \big\rVert_2 = \sqrt{(x_k - \hat{x}_k)^2 + (y_k - \hat{y}_k)^2}$
$\Pi_{m_i}$ denotes the permutations of $\{1, \dots, m_i\}$, $\lVert p_k - \hat{p}_k \rVert_2$ is the Euclidean distance between the positions of control points $p_k$ and $\hat{p}_k$, and $n$ is the number of control points; we use $n = 5$ in our experiments.
4.2.2 Attention Network
At each iteration of the RNN, a smaller network is used to predict a soft attention mask. This mask provides localized weights to different regions of the image features.
The output of the feature extractor is a 3D tensor $F$ of size $C \times H \times W$, where $C$ is the number of channels in the last layer of the network and $H \times W$ is the downsampled size of the input image. Each column, a vector $f_k$ with $C$ elements, relates to a certain region of the input image. Let $K = H \times W$ and write the image feature as
(11)  $F = \{ f_1, f_2, \dots, f_K \}$
At each time step $t$ of the RNN, we denote the input vector as $x_t$ and the hidden vector as $h_t$:
(12)  $h_t = \mathrm{GRU}(x_t, h_{t-1})$
The attention network is used to compute the input vector $x_t$; the hidden vector $h_t$ is used to predict the control points and stop probability:
(13)  $e_{t,k} = f_{\mathrm{att}}(f_k, h_{t-1}), \qquad \alpha_{t,k} = \frac{\exp(e_{t,k})}{\sum_{k'=1}^{K} \exp(e_{t,k'})}$
(14)  $x_t = \sum_{k=1}^{K} \alpha_{t,k}\, f_k$
$f_{\mathrm{att}}$ is a two-layer fully-connected network with ReLU activation. Its inputs are $f_k$, which corresponds to a certain part of the image, and $h_{t-1}$, which contains the information about the curve to be predicted at the current step. The output $e_{t,k}$ is a single scalar indicating how important region $k$ is.
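A minimal NumPy sketch of Equations (13)-(14) follows; the weight shapes and the concatenation of $f_k$ with $h_{t-1}$ are our assumptions about the form of $f_{\mathrm{att}}$:

```python
import numpy as np

def soft_attention(features, hidden, w1, w2):
    """Score every image region with a small two-layer network, softmax
    the scores into an attention map, and return the weighted sum of
    region features as the RNN input.
    features: (K, C) region descriptors; hidden: previous RNN state;
    w1, w2: illustrative weights of the two-layer scorer f_att."""
    K = features.shape[0]
    inp = np.concatenate([features, np.tile(hidden, (K, 1))], axis=1)
    scores = np.maximum(inp @ w1, 0.0) @ w2      # one scalar per region
    scores = scores.ravel()
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                         # attention weights (Eq. 13)
    return alpha @ features, alpha               # x_t (Eq. 14)
```

With all scores equal, the attention map is uniform and $x_t$ reduces to mean pooling, recovering the Section 4.1 behavior as a special case.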
4.3 Multiple Spline Curves, Variable Number of Control Points
Extending the previous model to the generation of multiple spline curves with variable numbers of control points requires two nested loops. First, we loop over the curves and determine the number of curves, generating one curve at each iteration of the generative process. The inner loop uses the information from the outer loop that is needed to generate one curve, looping over the control points and determining how many are needed to generate the curve. This is implemented through a hierarchical RNN structure, explained below.
4.3.1 Hierarchical RNN
We propose a Hierarchical RNN structure to model this generation procedure. A similar architecture has previously been used to describe visual content by captioning details in local regions of an image [KJKFF17]. We leverage the multiple recurrent units of a network to repeatedly extract local regions of an image containing information and then process this information. As before, an image feature extractor and an attention subnetwork process the input image before feeding it to the RNN. The model combines two RNNs hierarchically: one loops over the curves (Curve RNN) and the other over the control points (Point RNN). At each time step, the Curve RNN predicts the stop probability of the curve generation and also generates a vector representation of the current spline curve, which it forwards to the Point RNN. The Point RNN then decodes this vector representation of a spline curve into the positions of control points. Our model is shown in Figure 4.
4.3.2 Curve RNN
The Curve RNN is a single-layer GRU with hidden size 512; the initial hidden vector is predicted by a two-layer fully-connected network with the average image feature as input. At each time step, the Curve RNN receives as input the image feature vector $x_t$ obtained from the attention subnetwork. The hidden vector is then fed into a two-layer fully-connected network to obtain the curve stop probability and a vector representation from which the model predicts the curve at the current step; this representation is also the input to the Point RNN. This differs from Sections 4.1 and 4.2, where the hidden vector is fed directly into the two-layer fully-connected network to get the positions of control points (or the control-point sequence).
4.3.3 Point RNN
The Point RNN is also a single-layer GRU with hidden size 512 which, given a vector representation from the Curve RNN, generates the sequence of control-point positions. We follow the network configuration of Section 4.1. At each time step, the Point RNN predicts one control point together with a stop probability that indicates whether this control point is the end point.
4.3.4 Training and Inference
The two RNNs are trained jointly to predict the spline curves and their stop probabilities, as described in the previous sections. The pseudocode to train this Hierarchical RNN model is provided in Algorithm 1.
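The nested structure of this generation process can be sketched as follows, with `curve_step` and `point_step` standing in for the Curve RNN and Point RNN together with their output heads (the callable interfaces, step limits and stopping threshold are illustrative, not the paper's Algorithm 1 verbatim):

```python
def hierarchical_inference(image_feat, curve_step, point_step,
                           max_curves=8, max_points=8, thresh=0.5):
    """Outer loop: the Curve RNN emits one curve representation per
    step until its stop probability fires. Inner loop: the Point RNN
    decodes each representation into control points."""
    curves, h_curve = [], None
    for _ in range(max_curves):
        curve_repr, stop_c, h_curve = curve_step(image_feat, h_curve)
        points, h_point = [], None        # Point RNN state resets per curve
        for _ in range(max_points):
            pt, stop_p, h_point = point_step(curve_repr, h_point)
            points.append(pt)
            if stop_p > thresh:           # end of this curve's points
                break
        curves.append(points)
        if stop_c > thresh:               # no more curves
            break
    return curves
```

During training, the same two loops run for the ground-truth numbers of curves and points, with the losses of Section 4.3.4 accumulated at each step.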
The loss function has three terms: the mean-squared error of the predicted positions, the cross-entropy loss of the curve stop probabilities, and that of the point stop probabilities:
(15)  $\mathcal{L} = \mathcal{L}_{\mathrm{position}} + \lambda_c\,\mathcal{L}_{\mathrm{curve}} + \lambda_p\,\mathcal{L}_{\mathrm{point}}$
(16)  $\mathcal{L}_{\mathrm{curve}} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{m_i}\big[ s^i_j \log \hat{s}^i_j + (1-s^i_j)\log(1-\hat{s}^i_j) \big]$
(17)  $\mathcal{L}_{\mathrm{point}} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{m_i}\sum_{t=1}^{n_{ij}}\big[ s^i_{j,t} \log \hat{s}^i_{j,t} + (1-s^i_{j,t})\log(1-\hat{s}^i_{j,t}) \big]$
$\lambda_c$ and $\lambda_p$ are optimization hyperparameters, $m_i$ is the number of spline curves in image $i$, and $n_{ij}$ is the number of control points in spline curve $j$ of image $i$. $s$ denotes a target probability and $\hat{s}$ the corresponding predicted probability.
5 Unsupervised Parametric 3D Reconstruction
While the methods discussed in Section 4 could be employed to perform control point prediction when corresponding ground truth exists, solving for the parameters in the absence of ground truth is significantly harder.
Two natural obstacles must be overcome in this setting. The first is to devise a loss function, different from the Mean-Squared Error, that can be used to optimize the neural network: a pointwise comparison requires ground truth, which is missing here, so a technique to compare against a target point cloud must be devised. We assume we have the target point cloud, which is natural and necessary in traditional methods. The second obstacle is to make the network aware of its purpose, namely to predict spline curves (or surfaces), as opposed to arbitrary other primitives. These obstacles are entangled: the design of the loss function must take into account properties of spline curves (or surfaces), which in turn provide feedback to the neural network for determining the parameters.
Both traditional optimization methods (ICP [BM92, CM91]) and the ideas in [TSSP17], which utilizes neural networks for fluid simulation, serve as inspiration for solving this problem. Suppose we had an oracle that could predict the parameters perfectly; then the point-cloud representation of the predicted curves (or surfaces) could be generated. The Chamfer distance between the point clouds of the predicted curves (or surfaces) and the target point clouds can then be used as the loss function to train the network.
Though we use the same loss function as traditional methods, we use a learning technique to predict the control points, as opposed to optimizing them directly, since this lets us leverage the entire dataset instead of considering just one sample during optimization. The prediction learned by the network can also be used as the initialization for traditional optimization methods, circumventing manual initialization while reducing the number of optimization iterations. A learning-based approach also equips us to deal with multiple input formats, whereas traditional methods only use point-cloud input.
5.1 Reconstruction of Images or Point Clouds
In this section, we provide a technique to reconstruct surfaces of extrusion or revolution from images or point clouds. A surface of extrusion is generated by extruding a spline curve to a random height; a surface of revolution is generated by revolving the spline curve around the axis by 360 degrees. We only consider spline curves without self-intersections and with monotonically decreasing coordinates in the control-point sequence; this assumption is made because self-intersecting surfaces are generally not ubiquitous. Given the input data, features are extracted from it: if the input is a 2D image, the VGG network [SZ14] is used to extract image features as in Section 4.1; if the input is a point cloud, PointNet [QSMG16] is used. The extracted feature v is forwarded into a two-layer fully-connected network, which predicts the positions of the control points C (if surfaces of extrusion are considered, C also contains the extrusion height). Since C contains the control points, the predicted curve can be reconstructed as a linear combination of C that depends on the parameterization t of the spline curve. This is represented as follows.
(18)  $\mathbf{C} = F(\mathbf{v})$
(19)  $s(t) = \mathbf{f}(t)^{\top} \mathbf{C}$
where $F$ denotes the two-layer fully-connected network, and $\mathbf{f}$ is the basis function dependent on the spline parameter $t$.
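Equation (19) says that sampled curve points are a fixed linear map of the predicted control points. As the simplest concrete instance (the paper uses splines in general; a Bezier/Bernstein basis is shown here purely for illustration):

```python
import numpy as np
from math import comb

def bezier_basis(ts, n_ctrl):
    """Bernstein basis matrix B with B[k, j] = C(n-1, j) t^j (1-t)^(n-1-j),
    so sampled curve points are simply B @ C."""
    n = n_ctrl - 1
    coeff = np.array([comb(n, j) for j in range(n_ctrl)])
    ts = np.asarray(ts, dtype=float)[:, None]
    j = np.arange(n_ctrl)[None, :]
    return coeff * ts ** j * (1.0 - ts) ** (n - j)

def sample_curve(control_pts, n_samples=64):
    """Eq. (19): curve samples as a fixed linear combination of the
    control points, implementable as a linear layer with frozen weights."""
    B = bezier_basis(np.linspace(0, 1, n_samples), len(control_pts))
    return B @ control_pts
```

Because `B` is a constant matrix, gradients flow through it to the predicted control points, which is what makes the end-to-end training below possible.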
We use the Chamfer distance to measure the distance between the predicted point cloud and the target point cloud:
(20)  $d_{\mathrm{Chamfer}}(P, Q) = \sum_{p \in P} \min_{q \in Q} \lVert p - q \rVert_2^2 + \sum_{q \in Q} \min_{p \in P} \lVert q - p \rVert_2^2$
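A direct NumPy sketch of Equation (20):

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between two point clouds: for every
    point, the squared distance to its nearest neighbour in the other
    cloud, summed in both directions."""
    d2 = np.sum((P[:, None, :] - Q[None, :, :]) ** 2, axis=-1)  # (|P|, |Q|)
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()
```

The min and sum are both differentiable almost everywhere, which is what allows this distance to serve as a training loss.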
Since Equations (18)-(20) are differentiable, it is possible to train the network end-to-end. The function $\mathbf{f}$ represents the control-point weights used to generate the surface, and is implemented as a linear layer with fixed weights in the network. This architecture is generalizable: predicting other kinds of surfaces (such as surfaces of sweeping, or NURBS) would require only a change of this individual layer, with the rest of the model remaining the same.
6 Experiments
Training deep neural networks usually requires a copious amount of data. However, since there is not enough real data with ground-truth spline-curve labels, we use randomly synthesized data for our training and testing experiments. We synthesize a dataset of size 500,000, with a 70-30 train-to-test split. For each instance of the dataset, multiple spline curves and the control points of each spline curve are generated. The training dataset contains self-intersecting or looping spline curves, but no closed spline curves. Closed curves could be handled by adding a circularity term to the loss function in Equation 1, but for the purpose of this paper we do not venture into closed curves. We generate a dataset appropriate to each problem we attempt to solve and perform the training accordingly. We also run the trained model on real images to check the generalizability of our model.
For 2D reconstruction, we set the image size to 128×128 and randomly generate the number of spline curves as well as the number and positions of the control points of each spline curve in the 2D image plane. The number of control points is varied between 4 and 6 for spline-curve reconstruction with variable control points. The number of spline curves is varied between 1 and 3 for reconstructing multiple spline curves with the number of control points fixed at 5. These two variations are combined to train and test multiple-spline-curve reconstruction with a variable number of control points. For 3D reconstruction, the number of control points is fixed to 5, and the spline curve is revolved or extruded to obtain a 3D surface. If the input is an image, we render the 3D surface with a fixed camera angle and lighting condition; if the input is a point cloud, we randomly sample points from the 3D surface.
We use VGGNet as the image feature extractor for all three models in Section 4, initialized with pretrained weights from ImageNet [DDS09]. We then train the network with the Adam optimizer through backpropagation, with fixed learning rate and weight decay. Determining the number of points (or curves) is easier than predicting the positions of the control points; therefore, the hyperparameter $\lambda$ is set to 0.1 to place more emphasis on position regression. Our models are implemented in PyTorch and trained with a batch size of 32; convergence took approximately one to two days on a GeForce GTX 1080 GPU. Testing on one image needs only 1-2 ms.
Measuring the performance of our model is not straightforward. Due to the ambiguity that can arise when two shape-similar curves have totally different control-point sequences, or even different numbers of control points, a single measurement cannot cover all the characteristics of spline curves. We therefore use three complementary measurements: the Mean-Squared Error between the positions of predicted and target control points, the classification accuracy of the number of control points and the number of spline curves, and the Chamfer distance between point clouds sampled from the predicted and target spline curves.
6.1 Baseline Method
Traditional methods minimize a distance metric measured directly between a pair of geometric entities: the reconstructed surface and a target, often provided as a point cloud. In each iteration, the target points are projected onto the predicted surface (and vice versa) to obtain point-to-footpoint matches. As this distance can be expressed through the parameters of the prediction, minimizing it updates the parameters. Throughout the optimization, the distance and the footpoints are re-evaluated at each iteration. Faster algorithms based on normal-to-normal and point-to-tangent-plane distances have been used to accelerate the process, with the essentially similar idea of minimizing a local distance metric [BM92, CM91, WPL06, TBWP16]. For all our experiments, we minimize the point-to-point distance as the traditional method to compare against.
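One iteration of this point-to-point baseline can be sketched as follows, approximating footpoints by dense sampling of the current curve and re-solving the control points by linear least squares (the Bernstein basis and sampling density are our own illustrative choices):

```python
import numpy as np
from math import comb

def fit_step(control_pts, target, n_samples=200):
    """One iteration of point-to-point fitting: find each target
    point's footpoint parameter on the current curve (approximated by
    the nearest dense sample), then re-solve the control points in
    closed form by linear least squares at those parameters."""
    n_ctrl = len(control_pts)
    n = n_ctrl - 1
    coeff = np.array([comb(n, j) for j in range(n_ctrl)])

    def basis(ts):
        ts = np.asarray(ts, dtype=float)[:, None]
        j = np.arange(n_ctrl)[None, :]
        return coeff * ts ** j * (1.0 - ts) ** (n - j)

    ts = np.linspace(0, 1, n_samples)
    samples = basis(ts) @ control_pts
    # Footpoint parameter of each target point = parameter of its
    # nearest sample on the current curve.
    d2 = np.sum((target[:, None] - samples[None]) ** 2, axis=-1)
    foot_t = ts[d2.argmin(axis=1)]
    B = basis(foot_t)
    new_ctrl, *_ = np.linalg.lstsq(B, target, rcond=None)
    return new_ctrl
```

Iterating this step alternates footpoint re-evaluation with a closed-form parameter update, which is the local behavior that makes a good initialization essential.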
6.2 Results
We present results for both 2D spline-curve reconstruction and 3D reconstruction of surfaces of extrusion or revolution. As described in Sections 4 and 5, the method employed for 2D spline reconstruction is supervised, while the one used to reconstruct surfaces of extrusion or revolution is unsupervised.
6.2.1 Supervised 2D Reconstruction
We have proposed three supervised models in Sections 4.1, 4.2 and 4.3, aiming to reconstruct, respectively, a single spline curve with a variable number of control points, a variable number of spline curves with a fixed number of control points each, and a variable number of spline curves with a variable number of control points each. In this section, we showcase the reconstruction performance of these individual models, both on their own and in comparison to traditional energy-based models.
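In all three settings the networks output control point positions of spline curves with fixed knots (see the limitation noted in Sec. 6.2.5). For concreteness, one segment of a uniform cubic B-spline — the representation we assume here purely for illustration — can be evaluated from four control points as:

```python
import numpy as np

def cubic_bspline_segment(p, t):
    """One segment of a uniform cubic B-spline from 4 control points p (4, d),
    evaluated at local parameters t in [0, 1]."""
    t = np.asarray(t, dtype=float)[:, None]
    b = np.concatenate([(1 - t)**3,
                        3*t**3 - 6*t**2 + 4,
                        -3*t**3 + 3*t**2 + 3*t + 1,
                        t**3], axis=1) / 6.0   # blending weights; each row sums to 1
    return b @ p
```

Since evaluation is linear in the control points, predicting the control points fully determines the curve, which is what makes the regression targets below well defined.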
Single Spline Curve, Variable Number of Control Points.
We attempt to show that traditional energy-based models, which optimize the reconstruction of a spline curve given a fixed number of control points, perform significantly better when initialized with control points learned by our method than with the random initialization that is classically used. To this end, we randomly initialize the positions of the control points and use energy-based methods to perform the optimization. Since the traditional method uses a fixed number of input control points, we run this optimization for three different numbers of initial control points () and select the best optimized case as the final result. We observe that the learned initialization easily outperforms the random initialization. This is expected: the learned control points leverage the information contained in the training datasets about the relative position of control points to curves, providing a very good starting point for the optimization. This is observed in Figure 5, where the prediction by our RNN is close enough to the original curve to serve as an excellent initialization for control point optimization. After optimization based on this initialization, the obtained reconstruction almost exactly mirrors the input curves.
Variable Spline Curves, Fixed Number of Control Points.
The method discussed in Section 4.2 is used to predict the number of spline curves and the control points of each curve. In Figure 6, we again compare the target with the prediction of the RNN and with the optimization-based reconstructions initialized both randomly and from the RNN output. We again observe that the RNN prediction, when used as an initialization, comfortably outperforms the random initialization.
Variable Spline Curves, Variable Number of Control Points
The method discussed in Section 4.3 is used to predict a variable number of spline curves, each containing a variable number of control points. We again compare the target with the random and RNN-based initialization and reconstruction schemes in Figure 7. Note that in Figure 7, as in the case of the orange curve in the second row, even though both the predicted number of control points and their positions are wrong, the predicted and target curves look similar. This suggests that the RNN architecture learns the curves, though not necessarily in the specific manner we intend.
The optimization results after the RNN-based initialization are an order of magnitude better than those after a random initialization. This can be quantitatively validated over the entire test set, which has been synthetically generated with random control point positions and variability in the number of spline curves and control points. This evaluation is reported in Table 1, where V refers to a variable number of control points on a single spline curve, M to multiple spline curves with a fixed number of control points, and MV to multiple spline curves each containing a variable number of control points.
We also show other quantitative measures in Table 2: the mean-squared error in curve prediction, the accuracy in predicting the number of points and the number of curves, and the Chamfer distance between the reconstructed prediction and the target curves. Note that in the case of variable curves with a fixed number of control points, point accuracy is not an applicable measure since the number is known beforehand, and the same holds for curve accuracy in the case of variable control points on a single curve. Although both curve accuracy and point accuracy drop in the case of multiple spline curves with variable control points, the drop is minor and the performance remains strong by all comparable measures, as shown in Table 2.
     NN    NN Init    Random Init
V
M
MV
     MSE      Point Acc   Curve Acc   Chamfer Distance
V    0.01302  94.58       N/A
M    0.02699  N/A         99.85
MV   0.03738  82.75       99.49
6.2.2 Point Cloud Reconstruction
The reconstruction of surfaces of extrusion or revolution from point cloud inputs is performed as described in Section 5.1. A sampling of these reconstructions is shown in Figure 8, where a spline curve is predicted, in an unsupervised manner, so that its surface of revolution or extrusion resembles the input point cloud.
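As a sketch of how a predicted profile curve induces a surface of revolution, sampled points on the profile can be swept around the axis of revolution (the axis choice and sampling counts here are our assumptions, not specified by the method):

```python
import numpy as np

def revolve_profile(profile, n_angles=64):
    """Revolve a 2D profile of (radius, height) samples around the y-axis,
    returning a 3D point cloud of shape (len(profile) * n_angles, 3)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    r, y = profile[:, 0], profile[:, 1]
    x = r[:, None] * np.cos(theta)[None, :]
    z = r[:, None] * np.sin(theta)[None, :]
    yy = np.broadcast_to(y[:, None], x.shape)
    return np.stack([x, yy, z], axis=-1).reshape(-1, 3)
```

Because the swept point cloud is a differentiable function of the profile samples, a distance between it and the input cloud can serve directly as an unsupervised loss.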
6.2.3 Real/Synthetic Image Reconstruction
MNIST Spline Reconstruction. We use the trained networks to reconstruct images in the MNIST dataset [LBBH98]. The networks have been trained on synthetically generated data, so testing them on real data, as with MNIST, stretches the capabilities of the network. Nevertheless, our method performs notably well on this dataset, especially when the input images are more curved in nature, with fewer sharp edges. A sample of these reconstructed results is shown in Figure 9.
We first enlarge the MNIST images to size , and then thin the digits to lines, which form the input to our network. We observe that the mode of multiple spline curves with a fixed number of control points performs best for this task. Note that no postprocessing is performed on the result.
Surface Reconstruction. We also test on real images to generate surfaces of revolution. As a preprocessing step, we convert the photo to a grayscale image, then crop and pad it appropriately to fit the input size of the neural network. An instance of this reconstruction, performed as described in Section 5.1 on a synthetic input image, is shown in Figure 10; it can also be performed on real images, as illustrated in Figure 11.
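The preprocessing above can be sketched as follows (the target input size and the nearest-neighbor resampling are our assumptions; the text does not specify them):

```python
import numpy as np

def preprocess(img, size=64):
    """Convert to grayscale, zero-pad to a square, and resample to size x size."""
    if img.ndim == 3:                                   # RGB -> luminance
        img = img @ np.array([0.299, 0.587, 0.114])
    h, w = img.shape
    s = max(h, w)
    canvas = np.zeros((s, s))
    y0, x0 = (s - h) // 2, (s - w) // 2                 # center the image
    canvas[y0:y0 + h, x0:x0 + w] = img
    idx = (np.arange(size) * s) // size                 # nearest-neighbor indices
    return canvas[idx][:, idx]
```

Padding to a square before resampling preserves the aspect ratio of the photographed object, which matters since the profile curve is regressed in image coordinates.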
6.2.4 Visualization and Analysis
The attention map of the Hierarchical RNN is plotted at different curve-prediction iterations in Figure 12. This gives us a way to visualize how the network learns to pick the next curve to predict. In each iteration, the network tends to focus on the regions of the image corresponding to the curve currently being predicted. The attention region in these maps is usually close to the center of the curve: since convolution is repeatedly applied as the network gets deeper, the central part of an object or curve tends to aggregate more information than the marginal parts, and thus receives more attention.
6.2.5 Failure Cases
We showcase a number of failure cases of our method in Figure 13. We observe that the model tends to fail when multiple spline curves are heavily entangled. Such entanglements are difficult to separate even with manual human supervision, so failure in these cases is to be expected, and this is what the figure shows.
We also showcase some failure cases in the reconstruction of MNIST numerals in Figure 14. These can be attributed to the lack of training on real images; several issues arise when using MNIST images as test inputs. One recurring issue is closed curves: the training data contains no closed spline curves, but closed curves are ubiquitous in MNIST (numerals 0, 6, 8 and 9 in Figure 14), so they are alien to the network. Another issue is that images such as numerals are sharper in nature than plain spline curves; the predicted curves, while attempting to approximate the input image, are not expressive enough to come close. Real images also contain noise, such as spurious pixels, which can lead the network to predict overly complicated curves to account for it (numerals 1, 2, 5 and 7 in Figure 14). We believe these issues could be mitigated with domain adaptation techniques.
There are a few limitations to our methods. For 2D reconstruction, we only consider the family of spline curves whose knot positions are fixed; if both the control points and the knots need to vary, our methods require modification. For 3D reconstruction, we only consider surfaces with no self-intersections and decreasing y-coordinates, to avoid local minima. Finally, due to a lack of training data, we train our models on synthetic data and test them on real data without applying domain adaptation techniques that could help the network generalize; addressing this is left as future work.
7 Conclusion and Future Work
In this paper, we have illustrated approaches to reconstruct spline curves and surfaces using data-driven methods. Being both different from and complementary to traditional spline fitting, our methods adapt to different forms of input, need no initialization, and can handle a variable number of control points. Many exciting directions remain to be explored. A viable future direction is to investigate methods to detect, decompose, and recover multiple parametric surfaces from single images and consistently assemble them across multiple images. Another area of interest is how other types of information, such as semantics and physics, can be utilized to design and reconstruct clean, complex, and functional geometry and structures.
References
 [BCB14] Bahdanau D., Cho K., Bengio Y.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014).
 [BM92] Besl P. J., McKay N. D., et al.: A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence 14, 2 (1992), 239–256.
 [Can86] Canny J.: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8, 6 (June 1986), 679–698.
 [CKUF17] Castrejón L., Kundu K., Urtasun R., Fidler S.: Annotating object instances with a Polygon-RNN. In CVPR (2017).
 [CM91] Chen Y., Medioni G.: Object modeling by registration of multiple range images. In Robotics and Automation, 1991. Proceedings., 1991 IEEE International Conference on (1991), IEEE, pp. 2724–2729.
 [CVMBB14] Cho K., Van Merriënboer B., Bahdanau D., Bengio Y.: On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259 (2014).
 [CVMG14] Cho K., Van Merriënboer B., Gulcehre C., Bahdanau D., Bougares F., Schwenk H., Bengio Y.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014).
 [CXG16] Choy C. B., Xu D., Gwak J., Chen K., Savarese S.: 3D-R2N2: A unified approach for single and multi-view 3D object reconstruction. In European Conference on Computer Vision (2016), Springer, pp. 628–644.
 [DDS09] Deng J., Dong W., Socher R., Li L.J., Li K., FeiFei L.: ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on (2009), IEEE, pp. 248–255.
 [FLB16] Favreau J.D., Lafarge F., Bousseau A.: Fidelity vs. simplicity: a global approach to line drawing vectorization. ACM Transactions on Graphics (TOG) 35, 4 (2016), 120.
 [FSG16] Fan H., Su H., Guibas L.: A point set generation network for 3d object reconstruction from a single image. arXiv preprint arXiv:1612.00603 (2016).
 [HB89] Horn B. K., Brooks M. J.: Shape from shading. MIT press, 1989.
 [Hor70] Horn B. K.: Shape from shading: A method for obtaining the shape of a smooth opaque object from one view. PhD thesis, Massachusetts Institute of Technology (1970).
 [Ken79] Kender J. R.: Shape from texture: An aggregation transform that maps a class of textures into surface orientation. In Proceedings of the 6th international joint conference on Artificial intelligenceVolume 1 (1979), Morgan Kaufmann Publishers Inc., pp. 475–480.
 [KIO16] Kumar A., Irsoy O., Ondruska P., Iyyer M., Bradbury J., Gulrajani I., Zhong V., Paulus R., Socher R.: Ask me anything: Dynamic memory networks for natural language processing. International Conference on Machine Learning (2016), 1378–1387.
 [KJKFF17] Krause J., Johnson J., Krishna R., FeiFei L.: A hierarchical approach for generating descriptive image paragraphs. In Computer Vision and Pattern Recognition (CVPR) (2017).
 [KSH12] Krizhevsky A., Sutskever I., Hinton G. E.: Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, Pereira F., Burges C. J. C., Bottou L., Weinberger K. Q., (Eds.). Curran Associates, Inc., 2012, pp. 1097–1105.
 [LBBH98] LeCun Y., Bottou L., Bengio Y., Haffner P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86, 11 (1998), 2278–2324.
 [QSMG16] Qi C. R., Su H., Mo K., Guibas L. J.: PointNet: Deep learning on point sets for 3D classification and segmentation. arXiv preprint arXiv:1612.00593 (2016).
 [RT15] RomeraParedes B., Torr P. H. S.: Recurrent instance segmentation. CoRR abs/1511.08250 (2015).
 [SSISI16] SimoSerra E., Iizuka S., Sasaki K., Ishikawa H.: Learning to simplify: fully convolutional networks for rough sketch cleanup. ACM Transactions on Graphics (TOG) 35, 4 (2016), 121.
 [SSWF15] Sukhbaatar S., Szlam A., Weston J., Fergus R. D.: End-to-end memory networks. Neural Information Processing Systems (2015), 2440–2448.
 [SVL14] Sutskever I., Vinyals O., Le Q. V.: Sequence to sequence learning with neural networks. In Advances in neural information processing systems (2014), pp. 3104–3112.
 [SZ14] Simonyan K., Zisserman A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014).
 [TBWP16] Tang C., Bo P., Wallner J., Pottmann H.: Interactive design of developable surfaces. ACM Transactions on Graphics (TOG) 35, 2 (2016), 12.
 [TDB17] Tatarchenko M., Dosovitskiy A., Brox T.: Octree generating networks: Efficient convolutional architectures for high-resolution 3D outputs. In IEEE International Conference on Computer Vision (ICCV) (2017).
 [TSSP17] Tompson J., Schlachter K., Sprechmann P., Perlin K.: Accelerating Eulerian fluid simulation with convolutional networks. In Proceedings of the 34th International Conference on Machine Learning (International Convention Centre, Sydney, Australia, 06–11 Aug 2017), Precup D., Teh Y. W., (Eds.), vol. 70 of Proceedings of Machine Learning Research, pp. 3424–3433.
 [VTBE15] Vinyals O., Toshev A., Bengio S., Erhan D.: Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition (2015), pp. 3156–3164.
 [WPL06] Wang W., Pottmann H., Liu Y.: Fitting B-spline curves to point clouds by curvature-based squared distance minimization. ACM Transactions on Graphics (ToG) 25, 2 (2006), 214–238.
 [XBK15] Xu K., Ba J., Kiros R., Cho K., Courville A., Salakhudinov R., Zemel R., Bengio Y.: Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning (ICML15) (2015), Blei D., Bach F., (Eds.), JMLR Workshop and Conference Proceedings, pp. 2048–2057.
 [ZBLW12] Zheng W., Bo P., Liu Y., Wang W.: Fast B-spline curve fitting by L-BFGS. Computer Aided Geometric Design 29, 7 (2012), 448–462.