JSON Feed Viewer


arXiv feed: `testing`

arXiv feed for the query `testing`


Precipitate dissolution during deformation induced twin thickening in a CoNi-base superalloy subject to creep

Permalink - Posted on 2021-11-24 18:57

The tensile creep performance of a polycrystalline Co/Ni-base superalloy with a multimodal gamma prime distribution has been examined at 800 °C and 300 MPa. The rupture life of the alloy is comparable to that of RR1000 tested under similar conditions. Microstructural examination of the alloy after testing revealed the presence of continuous gamma prime precipitates and M23C6 carbides along the grain boundaries. Intragranularly, coarsening of the secondary gamma prime precipitates occurred at the expense of the fine tertiary gamma prime. Long planar deformation bands, free of gamma prime, were also observed to traverse individual grains ending in steps at the grain boundaries. Examination of the deformation bands confirmed that they were microtwins. Long sections of the microtwins examined were depleted of gamma prime stabilising elements across their entire width, suggesting that certain alloy compositions are susceptible to precipitate dissolution during twin thickening. A mechanism for the dissolution of the precipitates is suggested based on the Kolbe reordering mechanism.

Density-aware Chamfer Distance as a Comprehensive Metric for Point Cloud Completion

Permalink - Posted on 2021-11-24 18:56

Chamfer Distance (CD) and Earth Mover's Distance (EMD) are two broadly adopted metrics for measuring the similarity between two point sets. However, CD is usually insensitive to mismatched local density, and EMD is usually dominated by global distribution while overlooking the fidelity of detailed structures. Moreover, their unbounded value range makes them heavily influenced by outliers. These defects prevent them from providing a consistent evaluation. To tackle these problems, we propose a new similarity measure named Density-aware Chamfer Distance (DCD). It is derived from CD and benefits from several desirable properties: 1) it can detect disparity of density distributions and is thus a more intensive measure of similarity compared to CD; 2) it is stricter with detailed structures and significantly more computationally efficient than EMD; 3) the bounded value range encourages a more stable and reasonable evaluation over the whole test set. We adopt DCD to evaluate the point cloud completion task, where experimental results show that DCD pays attention to both the overall structure and local geometric details and provides a more reliable evaluation even when CD and EMD contradict each other. We can also use DCD as the training loss, which outperforms the same model trained with CD loss on all three metrics. In addition, we propose a novel point discriminator module that estimates the priority for another guided down-sampling step, and it achieves noticeable improvements under DCD together with competitive results for both CD and EMD. We hope our work could pave the way for a more comprehensive and practical point cloud similarity evaluation. Our code will be available at https://github.com/wutong16/Density_aware_Chamfer_Distance.
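As a reference point for the defects the abstract describes, the vanilla Chamfer Distance can be sketched in a few lines of NumPy. This is only the baseline metric; the bounded terms and density weighting that define DCD are specified in the paper and repository.

```python
import numpy as np

def chamfer_distance(p1, p2):
    """Vanilla (squared-L2) Chamfer Distance between point sets.

    p1: (n, d) array, p2: (m, d) array. Unbounded and insensitive to
    local density -- exactly the behaviour DCD is designed to fix.
    """
    # Pairwise squared distances, shape (n, m)
    d = np.sum((p1[:, None, :] - p2[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbour distance in both directions
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# DCD replaces each raw distance d with a bounded term 1 - exp(-alpha * d),
# additionally weighted by local point density (see the paper for details).
```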

Automatic Mapping with Obstacle Identification for Indoor Human Mobility Assessment

Permalink - Posted on 2021-11-24 18:36

We propose a framework that allows a mobile robot to build a map of an indoor scenario, identifying and highlighting objects that may be considered a hindrance to people with limited mobility. The map is built by combining recent developments in monocular SLAM with information from inertial sensors of the robot platform, resulting in a metric point cloud that can be further processed to obtain a mesh. The images from the monocular camera are simultaneously analyzed with an object recognition neural network, tuned to detect a particular class of targets. This information is then processed and incorporated into the metric map, resulting in a detailed survey of the locations and bounding volumes of the objects of interest. The result can be used to inform policy makers and users with limited mobility of the hazards present in a particular indoor location. Our initial tests were performed using a micro-UAV and will be extended to other robotic platforms.

iCompare: A Package for Automated Comparison of Solar System Integrators

Permalink - Posted on 2021-11-24 18:05

We present a tool for the comparison and validation of integration packages suitable for Solar System dynamics. iCompare, written in Python, compares the ephemeris prediction accuracy of a suite of commonly used integration packages (at present, JPL/HORIZONS, OpenOrb, and OrbFit). It integrates a set of test particles with orbits picked to explore both usual and unusual regions of Solar System phase space and compares the computed ephemerides to reference ephemerides. The results are visualized in an intuitive dashboard. This allows for the assessment of integrator suitability as a function of population, as well as monitoring their performance from version to version (a capability needed for the Rubin Observatory's software pipeline construction efforts). We provide the code on GitHub with a readily runnable version in Binder (https://github.com/dirac-institute/iCompare).
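The core quantity behind such a comparison is the on-sky angular separation between a computed and a reference position. The function below is a minimal stand-in for that step, not code from iCompare itself.

```python
import numpy as np

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation (radians) between two sky positions,
    with RA/Dec given in radians. Uses a Vincenty-style formula that
    stays numerically stable for the tiny separations relevant when
    comparing ephemerides."""
    dra = ra2 - ra1
    num = np.hypot(
        np.cos(dec2) * np.sin(dra),
        np.cos(dec1) * np.sin(dec2)
        - np.sin(dec1) * np.cos(dec2) * np.cos(dra),
    )
    den = np.sin(dec1) * np.sin(dec2) + np.cos(dec1) * np.cos(dec2) * np.cos(dra)
    return np.arctan2(num, den)
```

Evaluating this along two ephemeris tracks and plotting the residuals over time is the kind of per-population summary the dashboard visualizes.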

Analysing Statistical methods for Automatic Detection of Image Forgery

Permalink - Posted on 2021-11-24 17:48

Image manipulation and forgery detection have been a topic of research for more than a decade now. New-age tools and large-scale social platforms have given space for manipulated media to thrive. These media can be potentially dangerous, and thus innumerable methods have been designed and tested to prove their robustness in detecting forgery. However, the results reported by state-of-the-art systems indicate that supervised approaches achieve almost perfect performance, but only on particular datasets. In this work, we analyze the issue of out-of-distribution generalisability of the current state-of-the-art image forgery detection techniques through several experiments. Our study focuses on models that utilise handcrafted features for image forgery detection. We show that the developed methods fail to perform well on cross-dataset evaluations and in-the-wild manipulated media. As a consequence, a question is raised about the current evaluation and overestimated performance of the systems under consideration. Note: This work was done during a summer research internship at ITMR Lab, IIIT-Allahabad under the supervision of Prof. Anupam Agarwal.

Inter-pad distances of irradiated FBK Low Gain Avalanche Detectors

Permalink - Posted on 2021-11-24 17:36

Low Gain Avalanche Detectors (LGADs) are a type of thin silicon detector with a highly doped gain layer. LGADs manufactured by Fondazione Bruno Kessler (FBK) were tested before and after irradiation with neutrons. In this study, the inter-pad distances (IPDs), defined as the width of the insensitive region between pads, were measured with a TCT laser system. The response of the laser was tuned using $\beta$-particles from a $^{90}$Sr source. These insensitive "dead zones" are created by a protection structure to avoid breakdown, the Junction Termination Extension (JTE), which separates the pads. The effect of neutron radiation damage at fluences of $1.5\times10^{15}$ and $2.5\times10^{15}$ n$_{eq}$/cm$^2$ on the IPDs was studied. These distances were compared to the nominal distances provided by the vendor; the higher fluence was found to correspond to a better match with the nominal IPD.

EAD: an ensemble approach to detect adversarial examples from the hidden features of deep neural networks

Permalink - Posted on 2021-11-24 17:05

One of the key challenges in Deep Learning is the definition of effective strategies for the detection of adversarial examples. To this end, we propose a novel approach named Ensemble Adversarial Detector (EAD) for the identification of adversarial examples, in a standard multiclass classification scenario. EAD combines multiple detectors that exploit distinct properties of the input instances in the internal representation of a pre-trained Deep Neural Network (DNN). Specifically, EAD integrates the state-of-the-art detectors based on Mahalanobis distance and on Local Intrinsic Dimensionality (LID) with a newly introduced method based on One-class Support Vector Machines (OSVMs). Although all constituent methods assume that the greater the distance of a test instance from the set of correctly classified training instances, the higher its probability to be an adversarial example, they differ in the way such distance is computed. In order to exploit the effectiveness of the different methods in capturing distinct properties of data distributions and, accordingly, efficiently tackle the trade-off between generalization and overfitting, EAD employs detector-specific distance scores as features of a logistic regression classifier, after independent hyperparameter optimization. We evaluated the EAD approach on distinct datasets (CIFAR-10, CIFAR-100 and SVHN) and models (ResNet and DenseNet) and with regard to four adversarial attacks (FGSM, BIM, DeepFool and CW), also by comparing with competing approaches. Overall, we show that EAD achieves the best AUROC and AUPR in the large majority of the settings and comparable performance in the others. The improvement over the state-of-the-art, and the possibility to easily extend EAD to include any arbitrary set of detectors, pave the way to a widespread adoption of ensemble approaches in the broad field of adversarial example detection.
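The fusion step described above can be illustrated in a few lines: detector-specific distance scores become the features of a logistic regression. The scores below are synthetic stand-ins with an assumed mean shift for adversarial inputs, not outputs of the actual Mahalanobis/LID/OSVM detectors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-ins for three detector scores (e.g. Mahalanobis, LID,
# OSVM distances). Adversarial inputs tend to sit farther from the
# correctly classified training set, hence the shifted mean.
clean_scores = rng.normal(0.0, 1.0, size=(200, 3))
adv_scores = rng.normal(2.0, 1.0, size=(200, 3))
X = np.vstack([clean_scores, adv_scores])
y = np.array([0] * 200 + [1] * 200)        # 1 = adversarial

# EAD-style fusion: a logistic regression over the per-detector scores
ead = LogisticRegression().fit(X, y)
```

Extending the ensemble then amounts to appending one more score column per new detector.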

WFDefProxy: Modularly Implementing and Empirically Evaluating Website Fingerprinting Defenses

Permalink - Posted on 2021-11-24 16:56

Tor, an onion-routing anonymity network, has been shown to be vulnerable to Website Fingerprinting (WF), which de-anonymizes web browsing by analyzing the unique characteristics of the encrypted network traffic. Although many defenses have been proposed, few have been implemented and tested in the real world; others were only simulated. Due to its synthetic nature, simulation may fail to capture the real performance of these defenses. To figure out how these defenses perform in the real world, we propose WFDefProxy, a general platform for WF defense implementation on Tor using pluggable transports. We create the first full implementation of three WF defenses: FRONT, Tamaraw and Random-WT. We evaluate each defense in both simulation and implementation to compare their results, and we find that simulation correctly captures the strength of each defense against attacks. In addition, we confirm that Random-WT is not effective in either simulation or implementation, reducing the strongest attacker's accuracy by only 7%. We also found a minor difference in overhead between simulation and implementation. We analyze how this may be due to assumptions made in simulation regarding packet delays and queuing, or the soft stop condition we implemented in WFDefProxy to detect the end of a page load. The implementation of FRONT cost about 23% more data overhead than simulation, while the implementation of Tamaraw cost about 28% - 45% less data overhead. In addition, the implementation of Tamaraw incurred only 21% time overhead, compared to 51% - 242% estimated by simulation in previous work.

Accelerating Deep Learning with Dynamic Data Pruning

Permalink - Posted on 2021-11-24 16:47

Deep learning's success has been attributed to the training of large, overparameterized models on massive amounts of data. As this trend continues, model training has become prohibitively costly, requiring access to powerful computing systems to train state-of-the-art networks. A large body of research has been devoted to addressing the cost per iteration of training through various model compression techniques like pruning and quantization. Less effort has been spent targeting the number of iterations. Previous work, such as forget scores and GraNd/EL2N scores, addresses this problem by identifying important samples within a full dataset and pruning the remaining samples, thereby reducing the iterations per epoch. Though these methods decrease the training time, they use expensive static scoring algorithms prior to training. When accounting for the scoring mechanism, the total run time is often increased. In this work, we address this shortcoming with dynamic data pruning algorithms. Surprisingly, we find that uniform random dynamic pruning can outperform the prior work at aggressive pruning rates. We attribute this to the existence of "sometimes" samples -- points that are important to the learned decision boundary only some of the training time. To better exploit the subtlety of sometimes samples, we propose two algorithms, based on reinforcement learning techniques, to dynamically prune samples and achieve even higher accuracy than the random dynamic method. We test all our methods against a full-dataset baseline and the prior work on CIFAR-10 and CIFAR-100, and we can reduce the training time by up to 2x without significant performance loss. Our results suggest that data pruning should be understood as a dynamic process that is closely tied to a model's training trajectory, instead of a static step based on the dataset alone.
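The uniform random dynamic baseline is simple enough to sketch directly: instead of scoring the dataset once up front, a fresh subset is drawn every epoch, so "sometimes" samples still get seen. The function below is an illustrative sketch, not the authors' code.

```python
import numpy as np

def dynamic_random_prune(n_samples, keep_frac, n_epochs, seed=0):
    """Uniform random dynamic pruning: re-draw the kept subset every
    epoch, so samples discarded in one epoch can return in the next.
    No per-sample scoring pass is needed before training."""
    rng = np.random.default_rng(seed)
    k = int(n_samples * keep_frac)
    for _ in range(n_epochs):
        # Indices of the samples to train on this epoch
        yield rng.choice(n_samples, size=k, replace=False)

# Each of 5 epochs trains on a different random 30% of a 1000-sample set
subsets = list(dynamic_random_prune(1000, 0.3, 5))
```

The paper's reinforcement-learning variants replace the uniform draw with a learned per-sample selection policy.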

A Method for Evaluating the Capacity of Generative Adversarial Networks to Reproduce High-order Spatial Context

Permalink - Posted on 2021-11-24 15:58

Generative adversarial networks are a kind of deep generative model with the potential to revolutionize biomedical imaging. This is because GANs have a learned capacity to draw whole-image variates from a lower-dimensional representation of an unknown, high-dimensional distribution that fully describes the input training images. The overarching problem with GANs in clinical applications is that there are no adequate or automatic means of assessing the diagnostic quality of images generated by GANs. In this work, we demonstrate several tests of the statistical accuracy of images output by two popular GAN architectures. We designed several stochastic object models (SOMs) of distinct features that can be recovered after generation by a trained GAN. Several of these features are high-order, algorithmic pixel-arrangement rules which are not readily expressed in covariance matrices. We designed and validated statistical classifiers to detect the known arrangement rules. We then tested the rates at which the different GANs correctly reproduced the rules under a variety of training scenarios and degrees of feature-class similarity. We found that ensembles of generated images can appear accurate visually, and correspond to low Fréchet Inception Distance (FID) scores, while not exhibiting the known spatial arrangements. Furthermore, GANs trained on a spectrum of distinct spatial orders did not respect the given prevalence of those orders in the training data. The main conclusion is that while low-order ensemble statistics are largely correct, there are numerous quantifiable errors per image that plausibly can affect subsequent use of the GAN-generated images.

An innovative eye-tracker. Main features and demonstrative tests

Permalink - Posted on 2021-11-24 15:45

We present a set of results obtained with an innovative eye-tracker based on magnetic dipole localization by means of an array of magnetoresistive sensors. The system tracks both head and eye movements with a high rate (100-200 Sa/s) and in real time. A simple setup is arranged to simulate head and eye motions and to test the tracker performance under realistic conditions. Multimedia material is provided to substantiate and exemplify the results.

Fluctuations in Salem--Zygmund almost sure central limit theorem

Permalink - Posted on 2021-11-24 15:44

Let us consider i.i.d. random variables $\{a_k,b_k\}_{k \geq 1}$ defined on a common probability space $(\Omega, \mathcal F, \mathbb P)$, following a symmetric Rademacher distribution, and the associated random trigonometric polynomials $S_n(\theta)= \frac{1}{\sqrt{n}} \sum_{k=1}^n a_k \cos(k\theta)+b_k \sin(k\theta)$. A seminal result by Salem and Zygmund ensures that $\mathbb{P}$-almost surely, $\forall t\in\mathbb{R}$ \[ \lim_{n \to +\infty} \frac{1}{2\pi}\int_0^{2\pi} e^{i t S_n(\theta)}d\theta=e^{-t^2/2}. \] This result was then further generalized in various directions regarding the coefficient distribution, their dependency structure, or the dimension and nature of the ambient manifold. To the best of our knowledge, the natural question of the fluctuations in the above limit has not been tackled so far and is precisely the object of this article. Namely, for general i.i.d. symmetric random coefficients having a finite sixth moment and for a large class of continuous test functions $\phi$, we prove that \[ \sqrt{n}\left(\frac{1}{2\pi}\int_0^{2\pi} \phi(S_n(\theta))d\theta-\int_{\mathbb{R}}\phi(t)\frac{e^{-\frac{t^2}{2}}dt}{\sqrt{2\pi}}\right)\xrightarrow[n\to\infty]{\text{Law}}~\mathcal{N}\left(0,\sigma_{\phi}^2+\frac{c_2(\phi)^2}{2}\left(\mathbb{E}(a_1^4)-3\right)\right). \] Here, the constant $\sigma_{\phi}^2$ is explicit and corresponds to the limit variance in the case of Gaussian coefficients, and $c_2(\phi)$ is the coefficient of order $2$ in the decomposition of $\phi$ in the Hermite polynomial basis. Surprisingly, it thus turns out that the fluctuations are not universal, since they involve both the kurtosis of the coefficients and the second coefficient of $\phi$ in the Hermite basis.
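The Salem-Zygmund limit is easy to probe numerically: draw one realization of the Rademacher coefficients and compare the $\theta$-average of $e^{itS_n(\theta)}$ with the Gaussian characteristic function $e^{-t^2/2}$. The values of $n$, the grid size, and $t$ below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500                                    # number of terms in S_n
theta = np.linspace(0.0, 2.0 * np.pi, 2048, endpoint=False)
k = np.arange(1, n + 1)
a = rng.choice([-1.0, 1.0], size=n)        # Rademacher coefficients a_k
b = rng.choice([-1.0, 1.0], size=n)        # Rademacher coefficients b_k

# S_n(theta) = n^{-1/2} sum_k [a_k cos(k theta) + b_k sin(k theta)]
S = (a @ np.cos(np.outer(k, theta)) + b @ np.sin(np.outer(k, theta))) / np.sqrt(n)

t = 1.0
lhs = np.mean(np.exp(1j * t * S))          # grid average of (1/2pi) int e^{itS_n}
rhs = np.exp(-t ** 2 / 2.0)                # Gaussian characteristic function
```

For Rademacher coefficients the empirical variance of $S_n$ over a full period is exactly 1, and the gap `abs(lhs - rhs)` shrinks at the $1/\sqrt{n}$ rate quantified by the fluctuation theorem.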

Autonomous bot with ML-based reactive navigation for indoor environment

Permalink - Posted on 2021-11-24 15:24

Local or reactive navigation is essential for autonomous mobile robots which operate in an indoor environment. Techniques such as SLAM and computer vision require significant computational power, which increases cost. Similarly, using rudimentary methods makes the robot susceptible to inconsistent behavior. This paper aims to develop a robot that balances cost and accuracy by using machine learning to predict the best obstacle avoidance move based on distance inputs from four ultrasonic sensors that are strategically mounted on the front, front-left, front-right, and back of the robot. The underlying hardware consists of an Arduino Uno and a Raspberry Pi 3B. The machine learning model is first trained on the data collected by the robot. Then the Arduino continuously polls the sensors and calculates the distance values, and in case of critical need for avoidance, a suitable maneuver is made by the Arduino. In other scenarios, sensor data is sent to the Raspberry Pi using a USB connection and the machine learning model generates the best move for navigation, which is sent to the Arduino for driving motors accordingly. The system is mounted on a 2-WD robot chassis and tested in a cluttered indoor setting with impressive results.
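The split between the microcontroller's reflex and the model-based decision can be sketched as a single dispatch function. The threshold and move names below are illustrative choices, not values from the paper, and `model` stands for any trained classifier with a `predict` method.

```python
def choose_move(front, left, right, back, critical=15.0, model=None):
    """Reactive obstacle-avoidance dispatch (distances in cm).

    Mirrors the architecture described above: critical situations are
    handled immediately (on the Arduino), everything else is delegated
    to the trained classifier (on the Raspberry Pi)."""
    if front < critical:
        # Critical case: act without waiting for the model round-trip
        return "reverse" if back >= critical else "turn_right"
    if model is not None:
        # Non-critical case: the ML model picks the best manoeuvre
        return model.predict([[front, left, right, back]])[0]
    return "forward"
```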

Non-Intrusive Binaural Speech Intelligibility Prediction from Discrete Latent Representations

Permalink - Posted on 2021-11-24 14:55

Non-intrusive speech intelligibility (SI) prediction from binaural signals is useful in many applications. However, most existing signal-based measures are designed to be applied to single-channel signals. Measures specifically designed to take into account the binaural properties of the signal are often intrusive - characterised by requiring access to a clean speech signal - and typically rely on combining both channels into a single-channel signal before making predictions. This paper proposes a non-intrusive SI measure that computes features from a binaural input signal using a combination of vector quantization (VQ) and contrastive predictive coding (CPC) methods. VQ-CPC feature extraction does not rely on any model of the auditory system and is instead trained to maximise the mutual information between the input signal and output features. The computed VQ-CPC features are input to a predicting function parameterized by a neural network. Two predicting functions are considered in this paper. Both feature extractor and predicting functions are trained on simulated binaural signals with isotropic noise. They are tested on simulated signals with isotropic and real noise. For all signals, the ground truth scores are the (intrusive) deterministic binaural STOI. Results are presented in terms of correlations and MSE and demonstrate that VQ-CPC features are able to capture information relevant to modelling SI and outperform all the considered benchmarks - even when evaluating on data comprising different noise field types.

Statistical significance of the sterile-neutrino hypothesis in the context of reactor and gallium data

Permalink - Posted on 2021-11-24 14:54

We evaluate the statistical significance of the 3+1 sterile-neutrino hypothesis using $\nu_e$ and $\bar\nu_e$ disappearance data from reactor, solar and gallium radioactive source experiments. Concerning the latter, we investigate the implications of the recent BEST results. For reactor data we focus on relative measurements independent of flux predictions. For the problem at hand, the usual $\chi^2$-approximation to hypothesis testing based on Wilks' theorem has been shown in the literature to be inaccurate. We therefore present results based on Monte Carlo simulations, and find that this typically reduces the significance by roughly $1\,\sigma$ with respect to the na\"ive expectation. We find no significant indication of sterile-neutrino oscillations from reactor data. On the other hand, gallium data (dominated by the BEST result) show more than $5\,\sigma$ of evidence supporting the sterile-neutrino hypothesis, favoring oscillation parameters in agreement with reactor data. This explanation is, however, in significant tension ($\sim 3\,\sigma$) with solar neutrino experiments. In order to assess the robustness of the signal for gallium experiments we present a discussion of the impact of cross-section uncertainties on the results.

Causality-inspired Single-source Domain Generalization for Medical Image Segmentation

Permalink - Posted on 2021-11-24 14:45

Deep learning models usually suffer from domain shift issues, where models trained on one source domain do not generalize well to other unseen domains. In this work, we investigate the single-source domain generalization problem: training a deep network that is robust to unseen domains, under the condition that training data is only available from one source domain, which is common in medical imaging applications. We tackle this problem in the context of cross-domain medical image segmentation. Under this scenario, domain shifts are mainly caused by different acquisition processes. We propose a simple causality-inspired data augmentation approach to expose a segmentation model to synthesized domain-shifted training examples. Specifically, 1) to make the deep model robust to discrepancies in image intensities and textures, we employ a family of randomly-weighted shallow networks. They augment training images using diverse appearance transformations. 2) Further we show that spurious correlations among objects in an image are detrimental to domain robustness. These correlations might be taken by the network as domain-specific clues for making predictions, and they may break on unseen domains. We remove these spurious correlations via causal intervention. This is achieved by stratifying the appearances of potentially correlated objects. The proposed approach is validated on three cross-domain segmentation tasks: cross-modality (CT-MRI) abdominal image segmentation, cross-sequence (bSSFP-LGE) cardiac MRI segmentation, and cross-center prostate MRI segmentation. The proposed approach yields consistent performance gains compared with competitive methods when tested on unseen domains.

Softmax Gradient Tampering: Decoupling the Backward Pass for Improved Fitting

Permalink - Posted on 2021-11-24 13:47

We introduce Softmax Gradient Tampering, a technique for modifying the gradients in the backward pass of neural networks in order to enhance their accuracy. Our approach transforms the predicted probability values using a power-based probability transformation and then recomputes the gradients in the backward pass. This modification results in a smoother gradient profile, which we demonstrate empirically and theoretically. We do a grid search for the transform parameters on residual networks. We demonstrate that modifying the softmax gradients in ConvNets may result in increased training accuracy, thus increasing the fit across the training data and maximally utilizing the learning capacity of neural networks. We get better test metrics and lower generalization gaps when combined with regularization techniques such as label smoothing. Softmax gradient tampering improves ResNet-50's test accuracy by $0.52\%$ over the baseline on the ImageNet dataset. Our approach is very generic and may be used across a wide range of different network architectures and datasets.
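The power-based transformation can be made concrete in a few lines. This is a sketch under stated assumptions: the exponent `alpha` below is an illustrative value, not the paper's grid-searched parameter, and the transformed distribution is simply substituted into the standard cross-entropy gradient.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def tampered_softmax_grad(logits, targets, alpha=0.5):
    """Softmax gradient tampering, sketched: raise the predicted
    probabilities to a power alpha, renormalize, and plug the
    transformed distribution into the usual (p - y) gradient.
    alpha < 1 flattens the distribution, smoothing the gradient."""
    p = softmax(logits)
    q = p ** alpha
    q = q / q.sum(axis=-1, keepdims=True)   # renormalized probabilities
    y = np.eye(logits.shape[-1])[targets]   # one-hot targets
    return q - y                            # tampered gradient w.r.t. logits
```

Setting `alpha=1.0` recovers the ordinary softmax cross-entropy gradient, so the forward pass and loss are untouched; only the backward pass is decoupled.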

Eigenstate Thermalization in Long-Range Interacting Systems

Permalink - Posted on 2021-11-24 13:27

Motivated by recent ion experiments on tunable long-range interacting quantum systems [B. Neyenhuis et al., Sci. Adv. 3, 1 (2017)], we test the strong eigenstate thermalization hypothesis (ETH) for systems with power-law interactions $\sim r^{-\alpha}$. We numerically demonstrate that the strong ETH typically holds for systems with $\alpha\leq 0.6$, which include Coulomb, monopole-dipole, and dipole-dipole interactions. Compared with short-range interacting systems, the eigenstate expectation value of a generic local observable is shown to deviate significantly from its microcanonical ensemble average for long-range interacting systems. We find that Srednicki's ansatz breaks down for $\alpha\lesssim 1.0$.

General Analytical Conditions for Inflaton Fragmentation: Quick and Easy Tests for its Occurrence

Permalink - Posted on 2021-11-24 13:06

Understanding the physics of inflaton condensate fragmentation in the early Universe is crucial as the existence of fragments in the form of non-topological solitons (oscillons or Q-balls) may potentially modify the evolution of the post-inflation Universe. Furthermore, such fragments may evolve into primordial black holes and form dark matter, or emit gravitational waves. Due to the non-perturbative and non-linear nature of the dynamics, most of the studies rely on numerical lattice simulations. Numerical simulations of condensate fragmentation are, however, challenging, and, without knowing where to look in the parameter space, they are likely to be time-consuming as well. In this paper, we provide generic analytical conditions for the perturbations of an inflaton condensate to undergo growth to non-linearity in the cases of both symmetric and asymmetric inflaton potentials. We apply the conditions to various inflation models and demonstrate that our results are in good agreement with explicit numerical simulations. Our analytical conditions are easy to use and may be utilised to quickly identify models that may undergo fragmentation and determine the conditions under which they do so, which can guide subsequent in-depth numerical analyses.

Photoreverberation mapping of quasars in the context of LSST observing strategies

Permalink - Posted on 2021-11-24 10:57

The upcoming photometric surveys, such as the Rubin Observatory's Legacy Survey of Space and Time (LSST), will monitor an unprecedented number of active galactic nuclei (AGN) in a decade-long campaign. Motivated by the science goals of LSST, which include harnessing broadband light curves of AGN for photometric reverberation mapping (PhotoRM), we implement the existing formalism to estimate the lagged response of the emission line flux to the continuum variability using only multi-band photometric light curves. We test the PhotoRM method on a set of 19 artificial light curves simulated using a stochastic model based on the Damped Random Walk process. These light curves are sampled using different observing strategies, including the two proposed by the LSST, in order to compare the accuracy of time-lag retrieval for different observing cadences. Additionally, we apply the same time-lag retrieval procedure to the observed photometric light curves of NGC 4395, and compare our results to the existing literature.
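The Damped Random Walk used to simulate the artificial light curves is the Ornstein-Uhlenbeck process, which has a simple exact discretization. The sketch below is illustrative; the parameter values are arbitrary choices, not the ones used in the paper.

```python
import numpy as np

def drw_light_curve(n, dt, tau, sigma, mean=0.0, seed=0):
    """Damped Random Walk (Ornstein-Uhlenbeck) sample path, the standard
    stochastic model for AGN continuum variability. tau is the damping
    timescale and sigma the asymptotic standard deviation, in the same
    (arbitrary) units as dt and the flux respectively."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = mean
    a = np.exp(-dt / tau)                  # one-step autocorrelation
    s = sigma * np.sqrt(1.0 - a * a)       # innovation amplitude
    for i in range(1, n):
        x[i] = mean + a * (x[i - 1] - mean) + s * rng.normal()
    return x

# A continuum light curve sampled once per day for ten years
flux = drw_light_curve(n=3650, dt=1.0, tau=100.0, sigma=0.2, mean=18.0)
```

Resampling such a curve with a survey cadence (e.g. an LSST observing strategy) and cross-correlating the bands is the basic PhotoRM time-lag retrieval experiment.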