About me
This is a page not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in Journal of Optical Technology (English translation of Opticheskii Zhurnal), 2018
This paper introduces an application of artificial neural networks to visualizing the functions of neurons in the higher visual areas of the brain. First, a model that predicts the evoked neural response was implemented; it reaches a correlation coefficient of up to 0.82 for certain cortical columns. Then, an approach to explaining the representations encoded by neurons was proposed, based on generating images that maximize activation in the model. A comparison of the visualization results with the experimental data suggests that the approach can be used to study the properties of the higher-level areas of the visual cortex.
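The visualization step described above follows the activation-maximization idea: synthesize an image that drives a chosen unit of a response-prediction model. Below is a minimal PyTorch sketch of that idea; the stand-in model, the unit index, the image size, and the hyperparameters are illustrative assumptions, not the trained model from the paper.

```python
# Minimal sketch of activation maximization (the Sequential model, unit index,
# image size, and optimizer settings are illustrative stand-ins).
import torch
import torch.nn as nn

# Stand-in for a trained response-prediction model mapping images to unit responses.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=7, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),                      # 10 hypothetical output units ("cortical columns")
)
model.eval()

unit = 3                                    # unit whose preferred stimulus we want to visualize
img = torch.randn(1, 3, 128, 128, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    opt.zero_grad()
    activation = model(img)[0, unit]
    loss = -activation + 1e-4 * img.norm()  # gradient ascent on the unit, mild pixel regularization
    loss.backward()
    opt.step()

print("final activation:", model(img)[0, unit].item())
```

In practice the choice of regularizer strongly shapes the resulting images.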
Recommended citation: Malakhova, K. (2018). "Visualization of information encoded by neurons in the higher-level areas of the visual system." Journal of Optical Technology, 85, 494. https://doi.org/10.1364/JOT.85.000494
Published in Scientific Reports, 2021
Humans recognize individual faces regardless of variation in the facial view. The view-tuned face neurons in the inferior temporal (IT) cortex are regarded as the neural substrate for view-invariant face recognition. This study approximated the visual features encoded by these neurons as combinations of local orientations and colors derived from natural image fragments. The resulting features reproduced the preference of these neurons for particular facial views. We also found that faces of one identity were separable from faces of other identities in a space in which each axis represents one of these features. These results suggest that view-invariant face representation is established by combining view-sensitive visual features. They also suggest that, with respect to view-invariant face representation, the seemingly complex and deeply layered ventral visual pathway can be approximated by a shallow network composed of layers of low-level processing for local orientations and colors (V1/V2 level) and layers that detect particular sets of low-level elements derived from natural image fragments (IT level).
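As a rough illustration of the separability test mentioned above, the sketch below builds a toy feature vector from local orientation energy and mean color and asks whether one identity can be linearly separated from another in that space. The fragment_features function, the synthetic images, and the identity labels are all placeholders, not the features or data used in the paper.

```python
# Toy sketch of the identity-separability check (all data and features are placeholders).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def fragment_features(img):
    """Stand-in feature vector: local orientation energy (from image gradients) + mean color."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    orientation_energy, _ = np.histogram(np.arctan2(gy, gx), bins=8,
                                         weights=np.hypot(gx, gy))
    return np.concatenate([orientation_energy, img.mean(axis=(0, 1))])

# Synthetic "face images" for two identities; a real analysis would use labeled face views.
images = rng.random((40, 32, 32, 3))
labels = np.array([0] * 20 + [1] * 20)

X = np.stack([fragment_features(im) for im in images])
clf = LinearSVC().fit(X, labels)
print("training separability:", clf.score(X, labels))
```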
Recommended citation: Nam, Y., Sato, T., Uchida, G. et al. (2021). "View-tuned and view-invariant face encoding in IT cortex is explained by selected natural image fragments." Scientific Reports, 11, 7827. https://doi.org/10.1038/s41598-021-86842-7
Published:
It is common to compare the properties of visual information processing in artificial neural networks and in the primate visual system, and remarkable similarities have been observed between the responses of neurons in IT cortex and units in higher layers of CNNs. Here I show that the latent representations formed by the weights of convolutional layers do not necessarily reflect the visual domain; instead, they depend strongly on the choice of training set and cost function. The most striking example is an individual unit that is highly selective for some members of a category yet is inhibited by visually similar objects of the same category, and this surprising selectivity profile cannot be attributed to incidental differences in low-level statistics.
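A minimal way to examine such a selectivity profile is to record one unit's responses to category members and to visually similar non-preferred objects and compare them. The sketch below does this with a forward hook in PyTorch; the untrained AlexNet, the layer index, the unit index, and the random stimuli are placeholder assumptions, not the setup used in the talk.

```python
# Sketch: probe one convolutional unit's responses to two stimulus sets
# (network, layer, unit, and stimuli are placeholder assumptions).
import torch
import torchvision.models as models

net = models.alexnet(weights=None).eval()     # untrained stand-in; a real analysis uses a trained net

activations = {}
def hook(module, inputs, output):
    activations["feat"] = output

net.features[8].register_forward_hook(hook)   # an intermediate convolutional layer
unit = 42                                     # channel index to probe

def unit_response(batch):
    with torch.no_grad():
        net(batch)
    # mean spatial response of the chosen channel for each image in the batch
    return activations["feat"][:, unit].mean(dim=(1, 2))

category_images = torch.randn(16, 3, 224, 224)   # placeholders for category members
similar_objects = torch.randn(16, 3, 224, 224)   # placeholders for visually similar objects

print("category mean response:", unit_response(category_images).mean().item())
print("similar-object mean response:", unit_response(similar_objects).mean().item())
```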
Download extended abstract here
Published:
I show how an approach from signal detection theory can be applied to study high-level representations in a deep neural network and to identify sensitive detectors. The results suggest that category-selective filters can be observed from the first layers of deep neural networks. I also show that the tuning curve of a category-selective filter differs from what is usually seen in neural data. Representations of objects in the weights of convolutional layers do not necessarily reflect the perceptual similarity of images; instead, they depend strongly on the choice of training set and cost function. These properties of neural networks can lead a model to detect objects in their absence and to fail on cases that are obvious to a human observer.
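For readers unfamiliar with the signal-detection framing: a filter can be scored with the sensitivity index d', which compares its response distributions to target-category and non-target images. The sketch below applies the standard d' formula to synthetic response distributions; the distributions are placeholders for responses measured from a real network.

```python
# d' (sensitivity index) for one filter; the response distributions are synthetic placeholders.
import numpy as np

def d_prime(target_responses, nontarget_responses):
    """d' = (mean_t - mean_n) / sqrt(0.5 * (var_t + var_n))."""
    mu_t, mu_n = np.mean(target_responses), np.mean(nontarget_responses)
    var_t, var_n = np.var(target_responses), np.var(nontarget_responses)
    return (mu_t - mu_n) / np.sqrt(0.5 * (var_t + var_n))

rng = np.random.default_rng(1)
target = rng.normal(loc=2.0, scale=1.0, size=500)      # responses to the target category
nontarget = rng.normal(loc=0.5, scale=1.0, size=500)   # responses to everything else

print("d' for this filter:", round(d_prime(target, nontarget), 2))
```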
Published:
In this talk I show how convolutional neural networks (CNNs) and generative adversarial networks (GANs) can be used to study the latent representations encoded by neurons in higher visual areas. To explain neural preferences, I train a deep network to simulate the responses of neurons in the inferior temporal (IT) cortex of the macaque monkey. The model performs well, explaining more than 65% of the variance in the neural data. I then visualize the latent representations of artificial neurons using a generative adversarial network. This approach makes it possible to find an input signal that maximizes the activation of an individual unit without the limitations introduced by a dataset.
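The GAN-based visualization can be thought of as activation maximization run in the generator's latent space rather than in pixel space. Below is a minimal sketch of that idea; both networks are tiny stand-ins, and the latent size, image size, and optimizer settings are illustrative assumptions rather than the models used in the talk.

```python
# Sketch of GAN-based preferred-stimulus search (both networks are small stand-ins).
import torch
import torch.nn as nn

generator = nn.Sequential(                     # stand-in generator: latent z -> image
    nn.Linear(64, 3 * 64 * 64), nn.Tanh(),
    nn.Unflatten(1, (3, 64, 64)),
)
response_model = nn.Sequential(                # stand-in model of IT responses
    nn.Flatten(), nn.Linear(3 * 64 * 64, 32),
)
generator.eval(); response_model.eval()

unit = 7                                       # predicted neuron/unit to visualize
z = torch.randn(1, 64, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for step in range(300):
    opt.zero_grad()
    img = generator(z)
    loss = -response_model(img)[0, unit]       # ascend the unit's predicted response
    loss.backward()
    opt.step()

preferred_image = generator(z).detach()        # the unit's preferred stimulus, as rendered by the generator
print("predicted response:", response_model(preferred_image)[0, unit].item())
```

Optimizing in latent space keeps the search on the generator's manifold of natural-looking images, which is the main advantage over raw pixel optimization.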
Published:
A talk about recent results on applying generative networks to understand the information encoded by high-level neurons in deep networks and in the macaque monkey.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.