Analysis of harmonic parameters and detection of foreign frequencies in diagnostic signals, which are most often interpreted as symptoms of faults, may be problematic because of the spectral leakage effect. When the signal contains only the fundamental frequency and its harmonics, it is possible to adjust the spectral resolution so as to eliminate any distortions for the regular frequencies. The paper discusses the influence of resampling distortions on the quality of spectral resolution optimization in diagnostic signals recorded digitally for objects in a steady state. The effectiveness of the method is measured using a synthetic signal generated from an analog prototype whose parameters are known. In order to achieve low values of harmonic amplitude errors in the diagnostic signal, a high-quality resampling algorithm should be used; therefore, an analysis of the distortions generated by four popular resampling methods is performed. Errors are measured for test signals with different spectral structures. Finally, the results of testing the analyzed method in practical applications are presented.
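As an illustration of the underlying idea (a generic sketch, not the paper's algorithm), the record can be re-interpolated so that the DFT window spans an integer number of fundamental periods, which places every harmonic exactly on a DFT bin and removes leakage for the regular frequencies:

```python
# Generic sketch: tune the spectral resolution by resampling so that N output
# samples cover exactly P fundamental periods.  Sampling rate, fundamental
# frequency and signal content below are assumed values for illustration.
import numpy as np

fs = 5000.0                    # original sampling rate [Hz] (assumed)
f0 = 48.7                      # known fundamental frequency [Hz] (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
x = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 3 * f0 * t)

P, N = 40, 4096                          # P periods mapped onto N samples
t_new = np.arange(N) * (P / f0) / N      # new sampling instants, span = P/f0
x_res = np.interp(t_new, t, x)           # linear interpolation: the simplest
                                         # (and lowest-quality) resampler

X = np.abs(np.fft.rfft(x_res)) / (N / 2)
print(X[P], X[3 * P])          # harmonic amplitudes land exactly on bins P, 3P
```

Replacing the linear interpolator with a higher-quality polyphase or windowed-sinc resampler (e.g. scipy.signal.resample_poly) is what keeps the harmonic amplitude errors low; quantifying this effect for four popular resampling methods is the subject of the paper.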
Video walls are useful for displaying large-size video content. Empowered video walls combine display functionality with computing power. Such video walls can display large scientific visualizations. If they can also display high-resolution video streamed over a network, they could enable remote collaboration over scientific data. We proposed several methods of network streaming of high-resolution video content to a major type of empowered video wall, the SAGE2 system. For all methods, we evaluated their performance and discussed their scalability and properties. The results should be applicable to other web-based empowered video walls as well.
This article provides a comparison of three methods that can be used for calculating the effective coverage of an image quality assessment database. The aim of this metric is to show how well the database is filled with a variety of images. For each image in the database, the Spatial Information (SI) and Colorfulness (CF) metrics are calculated. The area of the convex hull containing all the points on the SI × CF plane is an indication of the total coverage of the database, but it does not show how efficiently this area is utilized. For this purpose, the effective coverage was introduced. An analysis is performed for 16 databases: 13 publicly available and 3 created artificially for the purpose of demonstrating the advantages of the effective coverage.
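A sketch of the SI/CF computation and of the total coverage as the convex hull area on the SI × CF plane is given below, assuming the ITU-T P.910 definition of SI and the Hasler-Süsstrunk colorfulness; the effective coverage metric itself is the article's contribution and is not reproduced here.

```python
# Sketch (not the article's code): place database images on the SI x CF plane
# and measure total coverage as the convex hull area of the resulting points.
import cv2
import numpy as np
from scipy.spatial import ConvexHull

def spatial_information(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    sx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    sy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    return float(np.std(np.hypot(sx, sy)))      # SI = std of gradient magnitude

def colorfulness(bgr):
    b, g, r = [c.astype(np.float64) for c in cv2.split(bgr)]
    rg, yb = r - g, 0.5 * (r + g) - b
    return float(np.hypot(rg.std(), yb.std())
                 + 0.3 * np.hypot(rg.mean(), yb.mean()))

def total_coverage(image_paths):
    pts = np.array([[spatial_information(img), colorfulness(img)]
                    for img in map(cv2.imread, image_paths)])
    return ConvexHull(pts).volume                # 2-D hull "volume" is its area
```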
Keypoint detection is a basic step in many computer vision algorithms aimed at object recognition, automatic navigation and analysis of biomedical images. Successful implementation of higher-level image analysis tasks, however, is conditioned by reliable detection of characteristic local image regions termed keypoints. A large number of keypoint detection algorithms have been proposed and verified. In this paper we discuss the most important keypoint detection algorithms. The main part of this work is devoted to the description of a keypoint detection algorithm we propose, which incorporates depth information computed from stereovision cameras or other depth-sensing devices. It is shown that filtering out keypoints that are context-dependent, e.g. located at object boundaries, can improve keypoint matching performance, which is the basis for object recognition tasks. This improvement is shown quantitatively by comparing the proposed algorithm to the widely accepted SIFT keypoint detector. Our study is motivated by the development of a system aimed at aiding the visually impaired in space perception and object identification.
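The general idea of depth-based keypoint filtering can be sketched as follows; this is a hedged illustration, not the authors' exact algorithm, and it assumes OpenCV's SIFT and a depth map aligned with the intensity image.

```python
# Sketch: reject keypoints that lie on strong depth discontinuities (object
# boundaries), where descriptors mix foreground and background and tend to
# match unreliably.  grad_thresh is an assumed tuning parameter.
import cv2
import numpy as np

def depth_filtered_sift(gray, depth, grad_thresh=0.1):
    """Detect SIFT keypoints and drop those lying on strong depth edges."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)

    # The magnitude of the depth gradient marks object boundaries.
    d = depth.astype(np.float32)
    gx = cv2.Sobel(d, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(d, cv2.CV_32F, 0, 1)
    boundary = np.hypot(gx, gy) > grad_thresh * (d.max() - d.min() + 1e-6)

    kept_kp, kept_desc = [], []
    for kp, desc in zip(keypoints, descriptors):
        x = int(np.clip(round(kp.pt[0]), 0, depth.shape[1] - 1))
        y = int(np.clip(round(kp.pt[1]), 0, depth.shape[0] - 1))
        if not boundary[y, x]:            # keep keypoints away from boundaries
            kept_kp.append(kp)
            kept_desc.append(desc)
    return kept_kp, np.array(kept_desc)
```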
As the most recent video coding standard, High Efficiency Video Coding (HEVC) adopts various novel techniques, including a quad-tree based coding unit (CU) structure and additional angular modes for intra encoding. These new techniques achieve a notable improvement in coding efficiency at the penalty of a significant increase in computational complexity. Thus, a fast HEVC coding algorithm is highly desirable. In this paper, we propose a fast intra CU decision algorithm for HEVC to reduce the coding complexity, based mainly on keypoint detection. A CU block is considered to have multiple gradients and is split early if corner points are detected inside the block. On the other hand, splitting of a CU block without corner points is terminated early when its RD cost is also small according to statistics of the previous frames. The proposed fast algorithm achieves over 62% encoding time reduction with 3.66%, 2.82%, and 2.53% BD-Rate loss for the Y, U, and V components, respectively, on average. The experimental results show that the proposed method is efficient in quickly deciding CU size in HEVC intra coding, even though only static parameters are applied to all test sequences.
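The decision rule described above can be illustrated with the following sketch; it is not the reference encoder implementation, and the corner-quality level and the RD threshold learned from previous frames are assumed parameters.

```python
# Sketch of the early CU decision: split early when corners are found inside
# the CU, terminate early when there are no corners and the RD cost is small,
# otherwise fall back to the normal RDO search.
import cv2

SPLIT, TERMINATE, FULL_RDO = "split", "terminate", "full_rdo"

def cu_decision(cu_block_gray, rd_cost, rd_thresh, corner_quality=0.01):
    corners = cv2.goodFeaturesToTrack(cu_block_gray, maxCorners=8,
                                      qualityLevel=corner_quality,
                                      minDistance=4)
    if corners is not None and len(corners) > 0:
        return SPLIT          # multiple gradients inside the CU: split early
    if rd_cost < rd_thresh:
        return TERMINATE      # smooth CU with small RD cost: stop splitting
    return FULL_RDO           # otherwise run the usual rate-distortion search
```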
Under normal conditions, the Critical Flicker Frequency (CFF) is usually 60 Hz. However, under some special conditions, such as low spatial frequency and high contrast between frames, this may not hold, and such conditions have a high probability of occurring in some TPVM-based applications. It is therefore extremely important to verify whether a visual signal with a given combination of temporal and spatial frequency can be recognized by human eyes. Based on the research in the previous paper "'Window of Visibility' inspired security lighting system", this paper introduces a method of measuring the Window of Visibility (WoV) of human eyes. In this paper we measure the critical flicker frequency under low spatial frequency and high contrast conditions, and we observe a conclusion different from that obtained under normal conditions.
In the paper we consider the fast transformation of a multilevel and multi-output circuit with AND, OR and NOT gates into a functionally equivalent circuit with NAND and NOR gates. The task can be solved by replacing AND and OR gates with NAND or NOR gates, which in some cases requires introducing additional inverters or splitting gates. In the paper, quick approximation algorithms for the circuit transformation are proposed, minimizing the number of inverters. The presented algorithms allow transformation of any multilevel circuit into a circuit composed of NOR gates, NAND gates, or both types of universal gates.
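The textbook replacement underlying the transformation can be sketched as follows; this is a simplified illustration based on De Morgan's laws, not the approximation algorithms proposed in the paper. Cancelling back-to-back inverter pairs after the replacement is what keeps the inverter count low.

```python
# Sketch: per-gate NAND-only replacement.  AND(a,b) = NOT(NAND(a,b)),
# OR(a,b) = NAND(NOT a, NOT b), NOT(a) = NAND(a,a).  Gates are represented as
# (type, input_signals, output_signal) tuples; names are illustrative only.
NAND, NOT = "NAND", "NOT"

def to_nand(gate_type, inputs, out):
    gates = []
    if gate_type == "AND":                       # AND = NAND + output inverter
        gates.append((NAND, list(inputs), out + "_n"))
        gates.append((NOT, [out + "_n"], out))
    elif gate_type == "OR":                      # OR = NAND of inverted inputs
        gates += [(NOT, [sig], sig + "_n") for sig in inputs]
        gates.append((NAND, [sig + "_n" for sig in inputs], out))
    elif gate_type == "NOT":                     # NOT = NAND with tied inputs
        gates.append((NAND, [inputs[0], inputs[0]], out))
    return gates

print(to_nand("OR", ["a", "b"], "y"))   # three gates before inverter cancelling
```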
This article reviews an investigation into the possibility of increasing the efficiency of existing line test solutions for troubleshooting IPTV over xDSL, based on the results of experimental research on a real system in commercial operation. At the beginning of the article, the main weaknesses of the existing troubleshooting tests are described. The article then lists the physical layer parameters of xDSL transceivers. It also provides a few specific examples of xDSL lines with the physical layer parameters of their transceivers, followed by an analysis of how these parameters can be used for more efficient measurement of copper pair parameters.
In this paper, a second-generation CMOS current-controlled current conveyor based on the differential pair of an operational transconductance amplifier is researched and presented. Since its major improvement is that the parasitic resistance at the x-port can be linearly controlled by an input bias current, the proposed building block is called "The Second-Generation Electronically-tunable Current-controlled Current Conveyor" (ECCCII). The applications are demonstrated in the form of both two-quadrant and four-quadrant current-mode signal multiplier circuits. Characteristics of the proposed ECCCII and its applications are simulated with the PSPICE program, and the results prove to be in agreement with the theory.
The study of satellite motion trajectories remains an urgent task for modern science. This is especially true for GNSS systems and for satellites intended for Earth remote sensing. The basis of their operation is the accurate determination of the satellite position and of the parameters of signal propagation. Given the great distances and speeds of both the satellites and the Earth, it is necessary to take the special and general theories of relativity into account when calculating these parameters. In the article, formulas are derived for calculating additional corrections for relativistic effects. A mathematical model for calculating the metric tensor is created. A sequence of corrections is also proposed.
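For context, the dominant relativistic rate offset of a satellite clock can be written in the standard textbook form below; this is general background, not one of the corrections derived in the article, and it neglects the motion of the ground clock.

```latex
% First-order rate offset of a satellite clock relative to a clock on the
% geoid, combining the special-relativistic (velocity) and the
% general-relativistic (gravitational potential) contributions:
\[
  \frac{\Delta f}{f}
  \;\approx\;
  \frac{\Phi_s - \Phi_E}{c^{2}} \;-\; \frac{v_s^{2}}{2c^{2}},
  \qquad \Phi = -\frac{GM}{r},
\]
% where $v_s$ is the satellite velocity, $\Phi_s$ and $\Phi_E$ are the
% gravitational potentials at the orbit and on the geoid, and $c$ is the
% speed of light.  For GPS-like orbits the net offset is roughly
% $+4.4\cdot10^{-10}$, i.e. about $38\,\mu\mathrm{s}$ per day, which must be
% compensated.
```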
Following the results presented in [21], we present an efficient approach to the Schur parametrization/modeling of a subclass of second-order time series which we term p-stationary time series. The approach yields a uniform hierarchy of algorithms suitable for efficient implementations and is a good starting point for nonlinear generalizations to higher-order non-Gaussian near-stationary time series.
The Traffic Flow Description (TFD) option of the IP protocol is an experimental option, designed by the authors and described in an IETF Internet Draft. This option was intended for signalling for QoS purposes. Knowledge about forthcoming traffic (such as the amount of data that will be transferred in a given period of time) is conveyed in the fields of the option between end-systems. TFD-capable routers on a path (or a multicast tree) between the sender and the receiver(s) are able to read this information, process it and use it for bandwidth allocation. If the time horizons are short enough, bandwidth allocation is performed dynamically. In the paper, a performance evaluation of HD video transmission with QoS assured using the TFD option is presented. The analysis was made for a variable number of video streams and a variable number of TCP flows that compete with the videos for the bandwidth of the shared link. The results show that dynamic bandwidth allocation using the TFD option assures the QoS of HD video better than the classic solution based on the RSVP protocol.
In the modern digital world, there is a strong demand for efficient data stream processing methods. One of the application areas is cybersecurity: IPsec is a suite of protocols that adds security to communication at the IP level. This paper presents the principles of a high-performance FPGA architecture for data stream processing, using an IPsec gateway implementation as an example. The efficiency of the proposed solution allows it to be used in networks with data rates of several Gbit/s.
The WILGA annual symposium on advanced photonic and electronic systems has been organized by young scientists for young scientists for two decades. It traditionally gathers around 400 young researchers and their tutors. Ph.D. students and graduates present their recent achievements during well-attended oral sessions. Wilga is a very good digest of Ph.D. work carried out at technical universities in electronics and photonics, as well as information sciences, throughout Poland and some neighboring countries. Publishing patronage over Wilga is held by the Elektronika technical journal (published by SEP), IJET and the Proceedings of SPIE. The latter world editorial series publishes more than 200 papers from Wilga annually. Wilga 2018 was the XLII edition of this meeting. The following topical tracks were distinguished: photonics, electronics, information technologies and system research. The article is a digest of some chosen works presented during the Wilga 2018 symposium. WILGA 2017 works were published in Proc. SPIE vol. 10445, and WILGA 2018 works in Proc. SPIE vol. 10808.
In this article we construct a finite-difference scheme for the three-dimensional equations of the atmospheric boundary layer. The solvability of the mathematical model is proved and qualitative properties of the solutions are studied. A priori estimates are derived for the solution of the differential equations. The mathematical questions of the difference schemes for the equations of the atmospheric boundary layer are studied. The nonlinear terms are approximated so that the corresponding integral term of the identity vanishes upon scalar multiplication. This property of the difference scheme is formulated as a lemma. The main a priori estimates for the solution of the difference problem are derived. Approximation properties are investigated, and the theorem on the convergence of the difference solution to the solution of the differential problem is proved.
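The vanishing-term construction referred to above can be illustrated, under the assumption of a standard skew-symmetric treatment of the advective terms; this is the usual device behind such energy estimates and need not coincide with the article's specific discretization.

```latex
% Illustration of the general principle: write the advective term in the
% skew-symmetric (half divergence, half advective) form
\[
  \Lambda(\mathbf{u})\,\varphi
  \;=\;
  \tfrac{1}{2}\bigl(\mathbf{u}\cdot\nabla\varphi
                    + \nabla\cdot(\mathbf{u}\,\varphi)\bigr),
\]
% because then, with vanishing boundary terms,
\[
  \bigl(\Lambda(\mathbf{u})\,\varphi,\;\varphi\bigr) \;=\; 0,
\]
% so the nonlinear term drops out of the energy identity obtained by scalar
% multiplication with the solution, which is what yields the a priori
% estimates; the discrete analogue of this property is what the cited lemma
% formulates for the difference scheme.
```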
The article is devoted to a method facilitating the diagnosis of dynamic faults in interconnection networks in systems-on-chip. It shows how to reconstruct the erroneous test response sequence coming from a faulty connection based on the set of signatures obtained as a result of multiple compactions of this sequence in a MISR register with programmable feedback. The Chinese remainder theorem is used for this purpose. The article analyzes in detail various hardware realizations of the discussed method. The testing time associated with each proposed solution is also estimated. The presented method can be used with any type of test sequence and test pattern generator. It is also easily scalable to any number of nets in the interconnection network. Moreover, it supports finding a trade-off between area overhead and testing time.
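As an illustration, a minimal sketch of the Chinese remainder theorem step alone is given below; how the signatures collected from the MISR with different programmable feedback polynomials are mapped onto residues modulo pairwise coprime moduli is the article's contribution and is not reproduced here.

```python
# Sketch: reconstruct a value from its residues modulo pairwise coprime
# moduli (Chinese remainder theorem).  Moduli and the example value below are
# illustrative assumptions, not the article's parameters.
from math import prod

def crt(residues, moduli):
    """Return x with x == residues[i] (mod moduli[i]), 0 <= x < prod(moduli)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)      # pow(Mi, -1, m): modular inverse
    return x % M

moduli = [7, 11, 13, 17]                  # pairwise coprime compactor lengths
residues = [1009 % m for m in moduli]     # residues of the sought index 1009
assert crt(residues, moduli) == 1009
```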
The paper presents an Improved Adaptive Arithmetic Coding algorithm for application in future video compression technology. The proposed solution is based on the Context-based Adaptive Binary Arithmetic Coding (CABAC) technique and uses the authors' mechanism of symbol probability estimation that exploits the Context-Tree Weighting (CTW) technique. This paper proposes a version of the algorithm that allows an arbitrary selection of the depth D of the context trees when the algorithm is activated within the AVC or HEVC video encoders. The algorithm has been tested in terms of data coding efficiency and computational complexity. The results showed that, depending on the depth of the context trees, a bitrate reduction of 0.1% to 0.86% is achieved when using the algorithm in the HEVC video encoder, and a compression gain of 0.4% to 2.3% in the case of AVC. The new solution increases the complexity of the entropy encoder itself; however, this does not cause an increase in the complexity of the whole video encoder.
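For readers unfamiliar with CTW, the sketch below shows a compact, unoptimized binary Context-Tree Weighting estimator with a fixed tree depth; the way such an estimator is integrated with CABAC's binarization and context modelling in the AVC/HEVC encoders is the paper's contribution and is not shown.

```python
# Sketch of binary CTW: every context-tree node keeps a Krichevsky-Trofimov
# block estimate Pe and the weighted probability Pw = 0.5*Pe + 0.5*Pw0*Pw1
# (Pw = Pe at depth-D leaves).  The tree depth and the example bits are
# illustrative assumptions.
class Node:
    def __init__(self):
        self.counts = [0, 0]          # numbers of observed 0s and 1s
        self.pe = 1.0                 # KT block probability
        self.pw = 1.0                 # weighted block probability
        self.children = [None, None]  # indexed by the next older context bit

class CTW:
    def __init__(self, depth):
        self.depth = depth
        self.root = Node()

    def _update(self, node, context, bit, level):
        a, b = node.counts            # KT: P(bit) = (count_bit + 0.5)/(n + 1)
        node.pe *= (node.counts[bit] + 0.5) / (a + b + 1.0)
        node.counts[bit] += 1
        if level == self.depth:       # leaf: weighted prob equals KT estimate
            node.pw = node.pe
        else:
            c = context[level]        # walk towards older context bits
            if node.children[c] is None:
                node.children[c] = Node()
            self._update(node.children[c], context, bit, level + 1)
            other = node.children[1 - c]
            pw_other = other.pw if other is not None else 1.0
            node.pw = 0.5 * node.pe + 0.5 * node.children[c].pw * pw_other

    def update(self, context, bit):
        """context[0] is the most recent past bit; len(context) == depth."""
        self._update(self.root, context, bit, 0)

ctw = CTW(depth=3)
bits = [1, 0, 1, 1, 0, 1, 1, 1]
for i in range(3, len(bits)):
    ctw.update(bits[i - 3:i][::-1], bits[i])
# ctw.root.pw now holds the weighted block probability of bits[3:] given
# their contexts; its ratios give the probabilities fed to the arithmetic coder.
```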
In the paper, two preprocessing methods for virtual
view synthesis are presented. In the first approach, both
horizontal and vertical resolutions of the real views and the
corresponding depth maps are doubled in order to perform
view synthesis on images with densely arranged points. In the
second method, the real views are filtered in order to eliminate blurred or improperly shifted edges of objects. Both methods are performed prior to synthesis; thus, they may be applied to different Depth-Image-Based Rendering (DIBR) algorithms. In the paper, the quality gains achieved with both proposed methods are presented.
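A minimal sketch of the first preprocessing step, as described above, might look as follows; the interpolation kernels chosen here are assumptions, not the authors' settings.

```python
# Sketch: double the horizontal and vertical resolution of a real view and its
# depth map before handing them to any DIBR view-synthesis algorithm.
import cv2

def upsample_for_synthesis(view_bgr, depth):
    h, w = view_bgr.shape[:2]
    view_up = cv2.resize(view_bgr, (2 * w, 2 * h),
                         interpolation=cv2.INTER_CUBIC)     # texture: smooth
    depth_up = cv2.resize(depth, (2 * w, 2 * h),
                          interpolation=cv2.INTER_NEAREST)  # depth: keep edges
    return view_up, depth_up
```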
In this paper, a modification of graph-based depth estimation is presented. The purpose of the proposed modification is to increase the quality of the estimated depth maps, reduce the estimation time, and increase the temporal consistency of the depth maps. The modification is based on image segmentation using superpixels; therefore, in the first step of the proposed modification, the segmentation of previous frames is reused for the currently processed frame in order to reduce the overall time of the depth estimation. In the next step, the depth map from the previous frame is used in the depth map optimization as the initial values of the depth map estimated for the current frame. This results in a better representation of object silhouettes in the depth maps and in reduced computational complexity of the depth estimation process. In order to evaluate the performance of the proposed modification, the authors performed an experiment on a set of multiview test sequences that varied in content and camera arrangement. The results of the experiments confirmed the increase in depth map quality: the quality of depth maps calculated with the proposed modification is higher than for the unmodified depth estimation method, regardless of the number of performed optimization cycles. Therefore, the use of the proposed modification allows depth of better quality to be estimated with an almost 40% reduction in estimation time. Moreover, the temporal consistency, measured through the reduction of the bitrate of encoded virtual views, was also considerably increased.
Optimization of the encoding process in video compression is an important research problem, especially in the case of modern, sophisticated compression technologies. In this paper, we consider HEVC, for which a novel method for the selection of encoding modes is proposed. By encoding modes we mean, e.g., the coding block structure, prediction types and motion vectors. The proposed selection is performed based on a noise-reduced version of the input sequence, while the information about the video itself, e.g. the transform coefficients, is coded based on the unaltered input. The proposed method involves encoding two versions of the input sequence. Further, we show a realization proving that the complexity is only negligibly higher than that of a single encoding. The proposal has been implemented in the HEVC reference software from MPEG and tested experimentally. The results show that the proposal provides up to 1.5% bitrate reduction while preserving the same quality of the decoded video.