Search results


Results: 30

Abstract

The method proposed in this paper is a special case of squared Msplit estimation. It concerns direct estimation of the shift between the parameters of the functional models of geodetic observations. The shift in question may result, for example, from deformation of a geodetic network or from other non-random disturbances that may influence the coordinates of the network points. The paper also presents an example where such a shift is identified with the phase displacement of a wave. The shift is estimated on the basis of wave observations, without any prior knowledge of where the displacement took place. The estimates of the shift proposed in the paper are named Shift-Msplit estimators.
Go to article

Authors and Affiliations

Robert Duchnowski
Zbigniew Wiśniewski

Abstract

The paper presents the results of research on the relationship between the degree of interdependence of observations after adjustment and the coexistence orders of these observations. An approximate model of this relationship is proposed, which makes it possible to estimate the degree of interdependence of observations without carrying out the adjustment procedure. The model can be applied in procedures for detecting gross errors in observations. A supplementary algorithm for determining the coexistence orders of observations from the coefficient matrix of the observation equations is also given.
Go to article

Authors and Affiliations

Mieczysław Kwaśniak

Abstract

The aim of this work is to develop and test an algorithm for the adjustment of geodetic observations that is resistant to gross errors (a robust estimation method), using a new damping function proposed by the author. Detailed formulas of the damping function are derived as a component of the objective function in the modified classical least squares method. Criteria for selecting the control parameters of the damping function are also given. The effectiveness of the presented adjustment algorithm is verified on two numerical examples. The results are analysed with reference to robust adjustment methods using other, known damping functions (e.g. the Hampel function).
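
As an illustration of the kind of algorithm described above, the following Python sketch performs a robust adjustment by iteratively reweighted least squares in which a Hampel-type damping function (mentioned in the abstract as a comparison baseline, not the author's new function) scales the a priori weights; the tuning constants a, b, c and the toy data are assumed for illustration only.

```python
import numpy as np

def hampel_weight(u, a=2.0, b=4.0, c=8.0):
    """Hampel damping function: weight factor as a function of the standardized residual."""
    u = np.abs(u)
    w = np.ones_like(u)
    m1 = (u > a) & (u <= b)
    m2 = (u > b) & (u <= c)
    w[m1] = a / u[m1]
    w[m2] = a * (c - u[m2]) / (u[m2] * (c - b))
    w[u > c] = 0.0
    return w

def robust_adjustment(A, y, p0, sigma0, iterations=30):
    """Iteratively reweighted least squares: the damping function scales the a priori weights."""
    p = p0.copy()
    for _ in range(iterations):
        W = np.diag(p)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        v = y - A @ x                          # residuals of the current solution
        p = p0 * hampel_weight(v / sigma0)     # damp the weights of suspect observations
    return x, v

# toy example: five measurements of one height difference, the last with a gross error
A = np.ones((5, 1))
y = np.array([10.01, 9.99, 10.02, 10.00, 10.50])
x, v = robust_adjustment(A, y, p0=np.ones(5), sigma0=0.02)
print(x)   # close to 10.00; the contaminated observation ends up with (near) zero weight
```
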
Go to article

Authors and Affiliations

Tadeusz Gargula

Abstract

This study attempted to examine the impacts of academic locus of control and metacognitive awareness on the academic adjustment of the student participants. Convenience sampling was applied to select a sample of 368 participants comprising 246 internals aged 17 to 28 years (M = 20.52, SD = 2.10) and 122 externals aged 17 to 28 years (M = 20.57, SD = 2.08). The findings indicated significant differences in the various dimensions of metacognition, academic lifestyle and academic achievement between the internals and externals, except for academic motivation and overall academic adjustment. There were significant gender differences in declarative knowledge, procedural knowledge, conditional knowledge, planning, information management, monitoring, evaluation and overall metacognitive awareness. Likewise, the internals and externals differed significantly in their mean scores of declarative knowledge, procedural knowledge, conditional knowledge, planning, information management, monitoring, debugging, evaluation and overall metacognitive awareness, academic lifestyle and academic achievement. Significant positive correlations existed between the scores of metacognitive awareness and academic adjustment. It was evident that an internal academic locus of control and metacognitive awareness were significant predictors of the academic adjustment of the students. The findings have been discussed in the light of recent findings in the field. The findings of the study have significant implications for understanding the academic success and adjustment of students and are thus relevant for teachers, educationists, policy makers and parents. Future research directions and limitations of the study are also discussed.

Go to article

Authors and Affiliations

Deepika Jain
Gyanesh Kumar Tiwari
Ishdutta Awasthi

Abstract

The process of railway track adjustment is the task of bringing, in geometrical terms, the actual track axis to the position ensuring safe and efficient traffic of rail vehicles. The initial calculation stage of this process is to determine approximately the limits of sections of different geometry, i.e. straight lines, arcs and transition curves. This makes it possible to draw up a draft alignment design, whose position is then checked against the current state. In practice, such a design rarely meets the requirements associated with the values of corrective alignments. Therefore, it becomes necessary to apply an iterative correction of the solution in order to determine the final design, allowing minor corrections to be introduced while maintaining the assumed parameters of the route. The degree of complexity of this process is determined by the quality of the preliminary draft alignment design. Delimitation of the sections for creating such a design is usually done using the curvature diagram (InRail v8.7 Reference Guide [1], Jamka et al. [2], Strach [3]), which is, however, sensitive to track misalignment and measurement errors. Lenda and Strach [4] proposed a new method for creating the curvature diagram, based on an approximating spline function, which should theoretically, among other things, reduce the vulnerability to interfering factors. In this study, the method was used to determine a preliminary draft alignment design for a severely overexploited track, and thus under conditions adversely affecting the accuracy of the readings. The results were compared with those obtained using the classical curvature diagram. The obtained results indicate that the method increases the readability of the curvature graph, which for a considerably misaligned track takes an irregular shape that is difficult to interpret. The method also favourably affects the accuracy of determining the initial parameters of the design and shortens the entire calculation process.
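
A minimal sketch of the idea behind a spline-based curvature diagram (not the authors' exact algorithm): a smoothing spline is fitted to the surveyed track-axis points and the curvature along it is evaluated, so that straights, arcs and transition curves can be delimited; the smoothing value and the simulated survey are assumptions.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def curvature_diagram(x, y, smoothing=1.0, n=500):
    """Approximate the surveyed track axis with a smoothing spline and return the
    curvature along it (straight: ~0, circular arc: ~constant, transition: ~linear)."""
    tck, _ = splprep([x, y], s=smoothing)
    u = np.linspace(0.0, 1.0, n)
    dx, dy = splev(u, tck, der=1)
    ddx, ddy = splev(u, tck, der=2)
    kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return u, kappa

# hypothetical noisy survey of a straight joined to a circular arc of radius 500 m
t = np.linspace(0.0, 200.0, 200)
x = np.where(t < 100, t, 100 + 500 * np.sin((t - 100) / 500))
y = np.where(t < 100, 0.0, 500 * (1 - np.cos((t - 100) / 500)))
x = x + np.random.normal(0, 0.01, t.size)
y = y + np.random.normal(0, 0.01, t.size)
u, kappa = curvature_diagram(x, y, smoothing=0.05)
```
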

Go to article

Authors and Affiliations

G. Lenda

Abstract

The objective of this paper is to derive the characteristics of an effective governance framework ensuring incentives for conducting a prudent fiscal policy. We study this problem with the use of econometric tools and a sample of 28 European Union Member States between 2003 and 2017. By looking at specific reforms and measures, we not only verify the synthetic effectiveness of fiscal constraints but also analyse specific elements of the governance framework. Our study shows that fiscal balances are affected not only by the economic cycle but, among others, by the level of public debt and its cost. We find that the existence of numerical fiscal rules, specifically revenue and expenditure rules, their strong legal entrenchment, surveillance mechanisms, sanctions, and flexibility with respect to the business cycle have a significant impact on curbing deficits.
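
A hedged sketch of the kind of panel regression such a study could rest on (the paper's actual econometric specification is not reproduced here): the fiscal balance is regressed on the cycle, debt, its cost and a fiscal-rule strength index with country fixed effects; the file fiscal_panel.csv and its column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical panel: one row per country-year, 2003-2017
# balance - general government balance, % of GDP
# gap     - output gap (economic cycle)
# debt    - public debt, % of GDP
# cost    - implicit interest cost of the debt
# fri     - fiscal rule strength index (legal base, surveillance, sanctions, flexibility)
df = pd.read_csv("fiscal_panel.csv")

model = smf.ols(
    "balance ~ gap + debt + cost + fri + C(country)",  # country fixed effects
    data=df,
).fit(cov_type="HC1")
print(model.summary())
```
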

Go to article

Authors and Affiliations

Grzegorz Poniatowski

Abstract

The paper considers a private ownership economy in which economic agents can realize their aims at given prices and Walras' law is satisfied, but the agents' optimal plans of action do not lead to an equilibrium in the economy. This means that the market clearing condition is not satisfied for the agents' optimal plans of action. In this context, the paper puts forward three specific adjustment processes resulting in equilibrium in a transformation of the initial economy. Specifically, it is shown, by the use of strict mathematical reasoning, that if there is no equilibrium in a private ownership economy at given prices, then, under some natural economic assumptions, after a mild evolution of the production sector, equilibrium at unchanged prices can be achieved.

Go to article

Authors and Affiliations

Agnieszka Lipieta

Abstract

The work presents the results of studies on the dependence of the effectiveness of chosen robust estimation methods on the internal reliability level of a geodetic network. The studies use computer-simulated observation systems, so it was possible to analyse many variants differing from each other in a planned way. Four methods of robust estimation, differing substantially in their approach to weight modification, were chosen for the studies. For comparison, the effectiveness studies were also conducted for a method very popular in surveying practice, namely gross error detection based on LS estimation results, the so-called iterative data snooping. The studies show that there is a relation between the level of network internal reliability and the effectiveness of robust estimation methods. In most cases in which the observation contaminated by a gross error was characterized by a low index of internal reliability, the robust estimation led to results far from expectations.
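
For reference, the sketch below computes the redundancy numbers (internal reliability indices) of a small, hypothetical network from its design and weight matrices; low values mark observations in which a gross error is hard to detect, which is the effect the study relates to the performance of robust estimation.

```python
import numpy as np

def redundancy_numbers(A, P):
    """Internal reliability indices r_i (redundancy numbers) of the observations:
    the diagonal of R = I - A (A^T P A)^{-1} A^T P. A low r_i means a gross error
    in observation i is hard to detect."""
    N_inv = np.linalg.inv(A.T @ P @ A)
    R = np.eye(A.shape[0]) - A @ N_inv @ A.T @ P
    return np.diag(R)

# hypothetical leveling network: heights h1, h2 determined from height differences
A = np.array([
    [ 1.0,  0.0],   # dh from a fixed benchmark to point 1
    [ 0.0,  1.0],   # dh from the fixed benchmark to point 2
    [-1.0,  1.0],   # dh between points 1 and 2
    [ 1.0,  0.0],   # repeated observation to point 1
])
P = np.eye(4)
print(redundancy_numbers(A, P))   # the values sum to n - u = 4 - 2 = 2
```
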
Go to article

Authors and Affiliations

Mieczysław Kwaśniak

Abstract

A geodetic survey of an existing route requires one to determine the approximation curve by means of optimization using the total least squares method (TLSM). The objective function of the LSM was found to be the square of the Mahalanobis distance in the adjustment field ν. In approximation tasks, the Mahalanobis distance is the distance from a survey point to the desired curve. In the case of linear regression, this distance is codirectional with a coordinate axis; in orthogonal regression, it is codirectional with the normal to the curve. Accepting the Mahalanobis distance from the survey point as a quasi-observation allows us to conduct the adjustment using a numerically exact parametric procedure. Analysis of the potential application of splines under the NURBS (non-uniform rational B-spline) industrial standard to route approximation has identified two issues: the lack of a value of the localizing parameter for a given survey point, and the use of vector parameters that define the shape of the curve. The value of the localizing parameter was determined by projecting the survey point onto the curve. This projection, together with the aforementioned Mahalanobis distance, splits the position vector of the curve into two orthogonal constituents within the local coordinate system of the curve. A similar system corresponds to the points that form the control polygonal chain and allows us to find their position with the help of a scalar variable that determines the shape of the curve by moving a knot toward the normal line.
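
A simplified sketch of determining the localizing parameter by projecting a survey point onto a spline curve; it uses a plain Euclidean distance and an ordinary B-spline rather than the Mahalanobis distance and NURBS described above, and the control points and the survey point are assumed values.

```python
import numpy as np
from scipy.interpolate import splprep, splev
from scipy.optimize import minimize_scalar

def locate_on_curve(tck, point):
    """Find the localizing parameter u* of a survey point (the parameter of its
    orthogonal projection onto the spline) and the offset used as the quasi-observation."""
    def dist2(u):
        cx, cy = splev(u, tck)
        return (cx - point[0]) ** 2 + (cy - point[1]) ** 2
    res = minimize_scalar(dist2, bounds=(0.0, 1.0), method="bounded")
    return res.x, np.sqrt(res.fun)

# hypothetical route approximation: fit a spline to control points, then project a survey point
xc = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
yc = np.array([0.0,  2.0,  3.0,  2.5,  0.0])
tck, _ = splprep([xc, yc], s=0.0)

u_star, d = locate_on_curve(tck, point=(21.0, 3.4))
print(u_star, d)   # localizing parameter and orthogonal offset of the survey point
```
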
Go to article

Authors and Affiliations

Edward Nowak

Abstract

Generally, gross errors exist in observations and affect the accuracy of results. We review methods of detecting gross errors by a robust estimation method based on L1-estimation theory and their validity in the adjustment of geodetic networks of various types. In order to detect the gross errors, we transform the weights of the stochastic model into equivalent weights using the raw observation residuals rather than the standardized residuals, and apply this method to the adjustment computation of a triangulation network, a traverse network, a satellite geodetic network and so on. In the triangulation network, we use a method of transformation into equivalent weights by residuals and detect gross errors in the parameter adjustment without and with conditions. The result of the proposed method is compared with the one obtained using the standardized residual as the equivalent weight. In the traverse network, the weights are determined by Helmert variance component estimation, gross errors are then detected, and the results are compared in the same way as for the triangulation network. In the satellite geodetic network, in which the observations are correlated, gross errors are detected by transformation into an equivalent correlation matrix using the residuals and the variance inflation factor, and the result is also compared with the result obtained using standardized residuals. The detection results show that it is more convenient and effective to detect gross errors by residuals in geodetic network adjustments of various forms than by standardized residuals.
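
A minimal sketch of gross-error detection with equivalent weights built from the raw residuals, in the spirit of the L1-based approach described above (the paper's exact weighting scheme and the variance-inflation treatment of correlated observations are not reproduced); the toy observations are assumed.

```python
import numpy as np

def l1_adjustment(A, y, p, iterations=50, eps=1e-6):
    """L1-type adjustment via equivalent weights: in every step the weight of
    observation i becomes p_i / |v_i|, so observations with large residuals
    (gross-error candidates) are progressively down-weighted."""
    p_eq = p.copy()
    for _ in range(iterations):
        W = np.diag(p_eq)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        v = y - A @ x
        p_eq = p / (np.abs(v) + eps)   # equivalent weights built from the raw residuals
    return x, v

# toy network with two unknowns; the last observation carries a gross error of +0.6
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([5.02, 3.01, 8.00, 2.01, 4.99, 3.60])
x, v = l1_adjustment(A, y, p=np.ones(6))
print(x)   # close to (5.0, 3.0)
print(v)   # the contaminated observation keeps a large residual and is flagged
```
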
Go to article

Authors and Affiliations

Jung-Hyang Kim
Chol-Jin Kim
Ryong-Jin Li

Abstract

This work consists of two parts. The first part, referring to the author's earlier paper (Wiśniewski, 2009), presents the theoretical foundations of Msplit estimation. Compared with the cited paper, the probabilistic assumptions are discussed here in more detail. The notion of f-information is also introduced, which makes it possible to propose a more general form of the split potential. The main content of this part is a generalization of the objective function of Msplit estimation. For this function, and with reference to the model of geodetic observations, the optimization problem is formulated and a method of solving it is presented. The second part, also referring to the cited paper, presents a special case of Msplit estimation called squared Msplit estimation. The theory of this version of Msplit estimation is developed and several numerical examples are presented, indicating its basic properties and possible areas of application.
Go to article

Authors and Affiliations

Zbigniew Wiśniewski

Abstract

This paper presents the results of research on the DiSTFA method (Displacements and Strains with usage of Transformation and Free Adjustment) for determining displacements and strains of surfaces surveyed in unstable reference frames. Covariance matrices enabling the accuracy assessment of the estimation results are also derived. The theoretical considerations are supplemented with an application example on a simulated three-dimensional geodetic network. The results obtained encourage further, more detailed analyses on real geodetic networks.
Go to article

Authors and Affiliations

Waldemar Kamiński

Abstract

This part of the work presents a special case of Msplit estimation, called squared Msplit estimation. The objective function is constructed here from convex quadratic functions. The theoretical foundations of squared Msplit estimation, its algorithm and several numerical examples are presented.
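
A minimal numerical sketch of squared Msplit estimation, assuming the commonly described alternating scheme in which the squared residuals of one split version act as weights for the other; the toy data mix two measurement regimes so that the two competing estimates can separate.

```python
import numpy as np

def squared_msplit(A, y, iterations=100):
    """Squared Msplit estimation (sketch): the parameter vector is split into two
    competing versions x1, x2; the objective sum_i v1_i^2 * v2_i^2 is minimized by
    alternating weighted least squares with the 'cross' squared residuals as weights."""
    # start from two slightly perturbed LS solutions so the split can develop
    x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
    x1, x2 = x_ls * 0.99, x_ls * 1.01
    for _ in range(iterations):
        w1 = (y - A @ x2) ** 2                 # weights for the first split version
        x1 = np.linalg.solve(A.T @ (w1[:, None] * A), A.T @ (w1 * y))
        w2 = (y - A @ x1) ** 2                 # weights for the second split version
        x2 = np.linalg.solve(A.T @ (w2[:, None] * A), A.T @ (w2 * y))
    return x1, x2

# toy example: repeated measurements of one quantity coming from two regimes
A = np.ones((8, 1))
y = np.array([10.01, 9.99, 10.02, 10.00, 10.51, 10.49, 10.52, 10.50])
x1, x2 = squared_msplit(A, y)
print(x1, x2)   # one estimate near 10.00, the other near 10.50
```
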
Go to article

Authors and Affiliations

Zbigniew Wiśniewski

Abstract

In this work it is assumed that an observation is the measured height difference of a leveling section, while a pseudo-observation is the sum of the observations made for the consecutive sections forming a leveling line. It is also assumed that the observations are mutually uncorrelated. The Helmert–Pranis-Praniewicz algorithm of parametric, multi-group (parallel) adjustment of the observations is compared with the algorithm of a two-stage adjustment of the leveling network. The two-stage adjustment consists of a least squares adjustment of the pseudo-observations and an adjustment of the observations, which is carried out separately for each leveling line. It is shown that the normal equations for the heights of the nodal points, formed from the pseudo-observations, are identical with the reduced normal equations formed from the observations in the multi-group adjustment. Consequently, the adjusted heights of the nodal points and their variance-covariance matrix are the same whether the observations or the pseudo-observations are adjusted. Next, an algorithm for computing the heights of the intermediate benchmarks of the leveling lines is presented. It is shown that the value of the standard error m0 of a typical observation/pseudo-observation is the same in both cases. It is concluded that the results of the two-stage adjustment and of the rigorous adjustment of the observations are identical.
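
The equivalence stated above can be illustrated numerically. The sketch below builds a small, hypothetical leveling network (one fixed benchmark, two nodal points, intermediate benchmarks on each line), adjusts all section observations rigorously, and then adjusts only the line sums treated as pseudo-observations with weights 1/k for k sections; the nodal heights from the two adjustments coincide.

```python
import numpy as np

# Leveling network: fixed benchmark R (H = 100.000 m), nodal points W1, W2, and
# intermediate benchmarks on each line. Section observations are uncorrelated and
# of equal weight, so a k-section pseudo-observation receives the weight 1/k.
H_R = 100.0

# --- rigorous adjustment of all section observations -------------------------
# unknowns: [W1, W2, A1, B1, B2, C1] (A1, B1, B2, C1 are intermediate benchmarks)
obs = [
    ("R", 2, 1.000),   # line A, section 1: R  -> A1
    (2,   0, 1.020),   # line A, section 2: A1 -> W1
    ("R", 3, 0.500),   # line B, section 1: R  -> B1
    (3,   4, 0.510),   # line B, section 2: B1 -> B2
    (4,   1, 0.490),   # line B, section 3: B2 -> W2
    (0,   5, -0.490),  # line C, section 1: W1 -> C1
    (5,   1, -0.510),  # line C, section 2: C1 -> W2
]
A = np.zeros((len(obs), 6))
y = np.zeros(len(obs))
for i, (f, t, dh) in enumerate(obs):
    if f == "R":
        y[i] = dh + H_R            # known height moved to the right-hand side
    else:
        A[i, f] = -1.0
        y[i] = dh
    A[i, t] += 1.0
h_full = np.linalg.solve(A.T @ A, A.T @ y)

# --- two-stage adjustment: pseudo-observations = sums along each line --------
Ap = np.array([[ 1.0, 0.0],    # line A: R  -> W1, 2 sections
               [ 0.0, 1.0],    # line B: R  -> W2, 3 sections
               [-1.0, 1.0]])   # line C: W1 -> W2, 2 sections
yp = np.array([2.020 + H_R, 1.500 + H_R, -1.000])
Pp = np.diag([1 / 2, 1 / 3, 1 / 2])
h_nodes = np.linalg.solve(Ap.T @ Pp @ Ap, Ap.T @ Pp @ yp)

print(h_full[:2])   # nodal heights from the rigorous adjustment
print(h_nodes)      # nodal heights from the pseudo-observation adjustment
print(np.allclose(h_full[:2], h_nodes))   # True: the two results coincide
```
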
Go to article

Authors and Affiliations

Idzi Gajderowicz

Abstract

The paper addresses the problem of automatic distortion removal from images acquired with a non-metric SLR camera equipped with prime lenses. From the photogrammetric point of view the following question arises: is the accuracy of the distortion control data provided by the manufacturer for a certain lens model (not an individual item) sufficient to achieve the demanded accuracy? In order to obtain a reliable answer, two kinds of tests were carried out for three lens models. First, a multi-variant camera calibration was conducted using software providing a full accuracy analysis. Second, an accuracy analysis using check points was performed. The check points were measured in images resampled based on the estimated distortion model, or in distortion-free images acquired directly in the automatic distortion removal mode. Extensive conclusions regarding the application of each calibration approach in practice are given. Finally, rules for applying automatic distortion removal in photogrammetric measurements are suggested.
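
As background to the distortion-removal step discussed above, the following sketch removes Brown-Conrady distortion from image points by fixed-point iteration; the distortion coefficients, principal point and focal length are hypothetical stand-ins for values taken either from a manufacturer's lens profile or from a calibration.

```python
import numpy as np

def undistort_points(xd, yd, k1, k2, p1, p2, cx, cy, f, iterations=10):
    """Remove Brown-Conrady (radial + tangential) distortion from image points by
    fixed-point iteration; (xd, yd) are distorted pixel coordinates."""
    x = (xd - cx) / f          # normalized distorted coordinates
    y = (yd - cy) / f
    xu, yu = x.copy(), y.copy()
    for _ in range(iterations):
        r2 = xu**2 + yu**2
        radial = 1 + k1 * r2 + k2 * r2**2
        dx = 2 * p1 * xu * yu + p2 * (r2 + 2 * xu**2)
        dy = p1 * (r2 + 2 * yu**2) + 2 * p2 * xu * yu
        xu = (x - dx) / radial
        yu = (y - dy) / radial
    return cx + f * xu, cy + f * yu

# hypothetical coefficients for a prime-lens profile and two image points
xd = np.array([120.0, 3500.0])
yd = np.array([80.0, 2300.0])
xu, yu = undistort_points(xd, yd, k1=-1.2e-1, k2=3.0e-2, p1=1e-4, p2=-5e-5,
                          cx=1824.0, cy=1216.0, f=3600.0)
```
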
Go to article

Authors and Affiliations

Jakub Kolecki
Antoni Rzonca

Abstract

Slope deformations, i.e. all types of landslides of rock masses (flow, creep, fall, etc.) caused by gravitational forces, are the most widespread manifestation of geological hazards and a negative geomorphological phenomenon: they threaten the safety of the population, destroy the utility value of the affected regions, negatively affect the environment, and cause considerable economic damage. Nowadays, Global Navigation Satellite Systems (GNSS) provide accurate data for precise observations around the world, owing to the growing number of satellites from multiple operators, more powerful and advanced technologies, and the implementation of mathematical and physical models that more accurately describe the systematic errors degrading GNSS observations, such as ionospheric, tropospheric and relativistic effects or multipath. A correct combination of measurement methods provides even more precise, i.e. better, measurement results or estimates of unknown parameters. The combination of measurement procedures and their appropriate evaluation is an essential attribute of deformation monitoring of landslides with regard to the protection of the environment and the safety of the population in the areas of interest, for the sustainable development of human society. This article presents the establishment and use of a local geodetic network in a particular local area for various needs. Depending on the specific conditions, it is possible to use GNSS technology to obtain accurate observations and achieve results applicable to the deformation survey in the subsequent adjustment procedure.
Go to article

Authors and Affiliations

Gabriel Weiss (1)
Slavomir Labant (1)
Juraj Gasinec (1)
Hana Stankova (2)
Pavel Cernota (2)
Erik Weiss (3)
Roland Weiss (3)

  1. Technical University of Kosice, Kosice, Slovakia
  2. VSB – Technical University of Ostrava, Ostrava, Czech Republic
  3. University of Economics in Bratislava, Bratislava, Slovakia

Abstract

General average is the oldest institution of maritime law. Its usefulness in contemporary shipping relations has long been criticized. Nevertheless, although general average is not the subject of any international convention, it occupies a prominent place in the domestic legal systems of maritime states, and the international community still shows considerable interest in it, regularly amending the rules of its adjustment established in the second half of the nineteenth century in York and Antwerp. While drafting the new Polish Maritime Code, the Maritime Law Codification Commission made certain changes to the regulations concerning general average, adapting the provisions of Polish law to new solutions proposed by participants in international maritime trade and by non-governmental organizations, including the Comité Maritime International.

Go to article

Authors and Affiliations

Cezary Łuczywek

Abstract

The article introduces a method for selecting the best clamping conditions to obtain vibration reduction during the milling of large-size workpieces. It is based on experimental modal analysis performed for a set of assumed fixing conditions of the considered workpiece, to identify frequency response functions (FRFs) for each tightening torque of the mounting screws. Simulated plots of the periodically changing nominal cutting forces are then calculated. Subsequently, by multiplying the FRF and the spectra of the cutting forces, a clamping selection function (CSF) is determined and, from this function, the vibration root mean square (RMS) is calculated, resulting in the clamping selection indicator (CSI), which indicates the best clamping of the workpiece. The effectiveness of the method was evidenced by assessing the RMS value of the vibration level observed in the time domain during the real-time face milling of a large-sized exemplary item. The proposed approach may be useful for seeking the best conditions for fixing a workpiece on the table.
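
A simplified numerical sketch of the CSF/CSI idea described above, assuming a single-mode FRF for each clamping variant and a synthetic spectrum of the periodic cutting force; the natural frequencies, damping ratios, stiffnesses and harmonic amplitudes are assumed values, not the authors' data.

```python
import numpy as np

def frf_sdof(freq, fn, zeta, k):
    """Receptance FRF of a single-mode approximation of the workpiece for one
    clamping variant, as identified e.g. by impact testing."""
    r = freq / fn
    return 1.0 / (k * (1 - r**2 + 2j * zeta * r))

def clamping_selection_indicator(force_spectrum, frf):
    """CSF = FRF x force spectrum; CSI = RMS of the predicted vibration spectrum."""
    csf = frf * force_spectrum
    return np.sqrt(np.mean(np.abs(csf) ** 2))

# simulated spectrum of the periodic milling force (tooth-passing harmonics)
freq = np.linspace(1.0, 2000.0, 4000)
tooth_passing = 250.0                     # Hz: spindle speed x number of teeth / 60
force = np.zeros_like(freq)
for h in range(1, 6):                     # first five harmonics with decaying amplitude
    force += 100.0 / h * np.exp(-0.5 * ((freq - h * tooth_passing) / 2.0) ** 2)

# two hypothetical clamping variants (different tightening torques -> different FRFs)
variants = {"clamping A": frf_sdof(freq, fn=480.0, zeta=0.03, k=2.0e7),
            "clamping B": frf_sdof(freq, fn=760.0, zeta=0.05, k=2.5e7)}
for name, frf in variants.items():
    print(name, clamping_selection_indicator(force, frf))
# the variant with the lower CSI is preferred for mounting the workpiece
```
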
Go to article

Authors and Affiliations

Krzysztof J. Kaliński (1)
Marek A. Galewski (1)
Natalia Stawicka-Morawska (1)
Krzysztof Jemielniak (2)
Michał R. Mazur (1)

  1. Gdansk University of Technology, Faculty of Mechanical Engineering and Ship Technology, Institute of Mechanics and Machine Design, Gdansk, 80-233, Poland
  2. Warsaw University of Technology, Faculty of Mechanical and Industrial Engineering, Institute of Manufacturing Processes, Warsaw, 00-661, Poland

Abstract

The purpose of the article is to verify a hypothesis about the asymmetric pass-through of crude oil prices to the selling prices of refinery products (unleaded 95 petrol and diesel oil). The distribution chain is considered at three levels: the European wholesale market, the domestic wholesale market and the domestic retail market. The error correction model with threshold cointegration proved to be an appropriate tool for the empirical analysis based on Polish data. As found, price transmission asymmetry in the fuel market is significant and its scale varies depending on the level of distribution. The only exception is the wholesale price transmission to the domestic refinery price. All conclusions are supported by the cumulative response functions. The analysis sheds new light on the price-setting processes in an imperfectly competitive fuel market of a medium-sized, non-oil-producing European country in transition.
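
A hedged sketch of an asymmetric error-correction estimation in the spirit of the model described above (a simple two-step Engle-Granger variant, not the paper's exact threshold-cointegration specification); the file fuel_prices.csv and its columns are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

# hypothetical weekly series: wholesale and retail fuel prices (in logs)
df = pd.read_csv("fuel_prices.csv", parse_dates=["date"], index_col="date")

# 1) long-run relation (cointegrating regression): retail price on wholesale price
long_run = sm.OLS(df["retail"], sm.add_constant(df["wholesale"])).fit()
ect = long_run.resid                        # deviation from the long-run equilibrium

# 2) asymmetric error correction: positive and negative deviations enter separately
d = pd.DataFrame({
    "d_retail": df["retail"].diff(),
    "d_wholesale": df["wholesale"].diff(),
    "ect_pos": ect.shift(1).clip(lower=0),  # retail above its equilibrium value
    "ect_neg": ect.shift(1).clip(upper=0),  # retail below its equilibrium value
}).dropna()
ecm = sm.OLS(d["d_retail"],
             sm.add_constant(d[["d_wholesale", "ect_pos", "ect_neg"]])).fit()
print(ecm.summary())   # unequal coefficients on ect_pos / ect_neg indicate asymmetry
```
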

Go to article

Authors and Affiliations

Katarzyna Leszkiewicz-Kędzior
Aleksander Welfe

Abstract

Because of the time value of money, investors are interested in obtaining economic benefits early and at the highest possible return. Some investment opportunities, e.g. mineral projects, however, require an investor to freeze their capital for several years. In exchange, they expect adequate remuneration for waiting, uncertainty and possible lost opportunities. This compensation is reflected in the level of the interest rate they demand. The commonly used approach to project evaluation – discounted cash flow analysis – uses this interest rate to determine the present value of future cash flows. Mining investors should pay particular attention to a project's cash flows – especially those arising in the first years of the project lifetime. With regard to the mining industry, this technique views a mineral deposit as a complete production project in which the basic sources of uncertainty are the future levels of economic-financial and technical parameters. Some of them are riskier than others – this paper tries to split them apart and weigh their importance using the example of Polish hard coal projects at the feasibility study stage. The work has been performed using a sensitivity analysis of the internal rate of return. Calculations were made under the 'bare bones' assumption (all-equity basis, constant money, after tax, flat price and constant operating costs), which creates a good reference and starting point for comparing other investment alternatives and for future investigations. The first part introduces the discounting issue; the following sections present the data and methods used for spinning off risk components from the feasibility-stage discount rate and, in the end, some recommendations are given.
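
A minimal sketch of a 'bare bones' IRR sensitivity analysis of the kind described above; the capital expenditure, output, price and unit cost are hypothetical figures, not data from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def irr(cash_flows):
    """Internal rate of return: the discount rate at which the NPV equals zero."""
    def npv(rate):
        t = np.arange(len(cash_flows))
        return np.sum(np.asarray(cash_flows) / (1 + rate) ** t)
    return brentq(npv, -0.9, 10.0)

def project_cash_flows(capex, output_t, price, unit_cost, tax=0.19, years=10):
    """'Bare bones' cash flows: all-equity, constant money, flat price and costs."""
    annual = (price - unit_cost) * output_t * (1 - tax)
    return [-capex] + [annual] * years

# hypothetical hard coal project with +/-10% sensitivity of selected parameters
base = dict(capex=900e6, output_t=2.0e6, price=320.0, unit_cost=240.0)
print("base IRR:", irr(project_cash_flows(**base)))
for name in ("price", "unit_cost", "output_t", "capex"):
    for factor in (0.9, 1.1):
        scenario = dict(base, **{name: base[name] * factor})
        print(f"{name} x{factor}: IRR = {irr(project_cash_flows(**scenario)):.3f}")
```
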

Go to article

Authors and Affiliations

Piotr W. Saługa

Abstract

The adjustment problem of the so-called combined (hybrid, integrated) network created from GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. Network adjustment in various mathematical spaces has been considered: in the Cartesian geocentric system, on the reference ellipsoid and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often adopted. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. The analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach with respect to the satellite observations is to create the observation equations directly for the original GNSS Cartesian vector components, writing them directly as functions of the geodetic coordinates (in numerical applications, the linearized forms of the observation equations with explicitly specified coefficients are used). By retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors into ellipsoid elements, for example the vector of geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system is considered for the preferred functional model of the GNSS observations.
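
A sketch of the preferred functional model described above: the GNSS Cartesian vector components are written directly as functions of the geodetic coordinates of both endpoints, and the linearized observation equation is obtained here by numerical differentiation (a production adjustment would use the analytic coefficients); the station coordinates and the simulated vector are assumed values.

```python
import numpy as np

# GRS80 ellipsoid parameters
a = 6378137.0
e2 = 0.00669438002290

def geodetic_to_xyz(B, L, h):
    """Geodetic coordinates (B, L in radians, h in metres) to geocentric Cartesian XYZ."""
    N = a / np.sqrt(1.0 - e2 * np.sin(B) ** 2)
    return np.array([(N + h) * np.cos(B) * np.cos(L),
                     (N + h) * np.cos(B) * np.sin(L),
                     (N * (1.0 - e2) + h) * np.sin(B)])

def baseline(p):
    """GNSS vector components as a function of the geodetic coordinates of both endpoints,
    p = [B1, L1, h1, B2, L2, h2]."""
    return geodetic_to_xyz(*p[3:]) - geodetic_to_xyz(*p[:3])

def jacobian(p):
    """Numerical Jacobian of the baseline with respect to the geodetic coordinates."""
    steps = np.array([1e-7, 1e-7, 1e-3, 1e-7, 1e-7, 1e-3])   # radians / metres
    J = np.zeros((3, 6))
    for j in range(6):
        dp = np.zeros(6)
        dp[j] = steps[j]
        J[:, j] = (baseline(p + dp) - baseline(p - dp)) / (2.0 * steps[j])
    return J

# approximate station coordinates (assumed values) and a simulated observed vector
x0 = np.array([np.radians(52.00), np.radians(21.00), 120.0,
               np.radians(52.10), np.radians(21.20), 140.0])
dX_obs = baseline(x0) + np.array([0.012, -0.008, 0.005])
A_lin = jacobian(x0)
w = dX_obs - baseline(x0)      # misclosure; linearized equations: v = A_lin @ dx - w
```
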
Go to article

Authors and Affiliations

Roman Kadaj

Abstract

The article describes the process of creating 3D models of architectural objects on the basis of video images acquired with a Sony NEX-VG10E fixed focal length video camera. It was assumed that, based on video and Terrestrial Laser Scanning data, it is possible to develop 3D models of architectural objects. The acquisition of the video data was preceded by calibration of the video camera. The process of creating 3D models from video data involves the following steps: selection of video frames for the orientation process, orientation of the video frames using points with known coordinates from Terrestrial Laser Scanning (TLS), and generation of a TIN model using automatic matching methods. The objects were measured with an impulse laser scanner, a Leica ScanStation 2. The 3D models of architectural objects created in this way were compared with 3D models of the same objects for which the self-calibration bundle adjustment process was performed. For this purpose, PhotoModeler software was used. To assess the accuracy of the developed 3D models of architectural objects, points with known coordinates from Terrestrial Laser Scanning were used, applying a shortest distance method. The accuracy analysis showed that the 3D models generated from video images differ by about 0.06–0.13 m from the TLS data.
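
A minimal sketch of the shortest-distance accuracy assessment mentioned above, approximating the distance to the TLS surface by the distance to the nearest TLS point (acceptable for a dense cloud); the point sets are simulated.

```python
import numpy as np
from scipy.spatial import cKDTree

def shortest_distance_accuracy(model_points, tls_points):
    """Accuracy assessment by the shortest-distance method: for every vertex of the
    video-based 3D model find the nearest TLS point and report distance statistics."""
    tree = cKDTree(tls_points)
    d, _ = tree.query(model_points)
    return {"mean": d.mean(), "rms": np.sqrt(np.mean(d**2)), "max": d.max()}

# hypothetical data: model vertices derived from video frames vs. a dense TLS cloud
tls = np.random.rand(100000, 3) * 10.0
model = tls[np.random.choice(len(tls), 2000)] + np.random.normal(0, 0.08, (2000, 3))
print(shortest_distance_accuracy(model, tls))
```
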
Go to article

Authors and Affiliations

Paulina Deliś
Michał Kędzierski
Anna Fryśkowska
Michalina Wilińska
